Dec 11 18:00:38 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 18:00:38 crc restorecon[4731]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 18:00:38 crc restorecon[4731]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 
18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 
crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 
18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 
crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc 
restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 18:00:38 crc restorecon[4731]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 11 18:00:39 crc kubenswrapper[4877]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.068304 4877 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071387 4877 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071399 4877 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071405 4877 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071411 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071416 4877 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071421 4877 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071426 4877 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071430 4877 feature_gate.go:330] unrecognized feature gate: Example Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071435 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071440 4877 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071445 4877 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071449 4877 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071454 4877 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071459 4877 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071466 4877 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071473 4877 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071478 4877 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071491 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071497 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071501 4877 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071506 4877 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071510 4877 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071515 4877 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071519 4877 feature_gate.go:330] 
unrecognized feature gate: AzureWorkloadIdentity Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071524 4877 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071528 4877 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071532 4877 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071537 4877 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071541 4877 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071546 4877 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071550 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071555 4877 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071559 4877 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071564 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071568 4877 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071573 4877 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071577 4877 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071581 4877 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 18:00:39 crc 
kubenswrapper[4877]: W1211 18:00:39.071586 4877 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071590 4877 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071595 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071599 4877 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071604 4877 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071608 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071612 4877 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071617 4877 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071621 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071626 4877 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071630 4877 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071636 4877 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071642 4877 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071648 4877 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071655 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071660 4877 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071674 4877 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071680 4877 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071685 4877 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071691 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071696 4877 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071701 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071707 4877 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071711 4877 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071718 4877 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071723 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071729 4877 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071734 4877 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071738 4877 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071743 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071747 4877 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071752 4877 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.071756 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072010 4877 flags.go:64] FLAG: --address="0.0.0.0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072024 4877 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072034 4877 flags.go:64] FLAG: --anonymous-auth="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072041 4877 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072048 4877 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072054 4877 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072061 4877 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 
18:00:39.072068 4877 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072074 4877 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072079 4877 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072085 4877 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072091 4877 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072096 4877 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072102 4877 flags.go:64] FLAG: --cgroup-root="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072107 4877 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072112 4877 flags.go:64] FLAG: --client-ca-file="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072118 4877 flags.go:64] FLAG: --cloud-config="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072125 4877 flags.go:64] FLAG: --cloud-provider="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072130 4877 flags.go:64] FLAG: --cluster-dns="[]" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072136 4877 flags.go:64] FLAG: --cluster-domain="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072141 4877 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072147 4877 flags.go:64] FLAG: --config-dir="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072152 4877 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072158 4877 flags.go:64] FLAG: --container-log-max-files="5" Dec 11 18:00:39 crc 
kubenswrapper[4877]: I1211 18:00:39.072164 4877 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072169 4877 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072175 4877 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072180 4877 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072185 4877 flags.go:64] FLAG: --contention-profiling="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072191 4877 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072196 4877 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072201 4877 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072207 4877 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072213 4877 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072218 4877 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072223 4877 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072228 4877 flags.go:64] FLAG: --enable-load-reader="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072233 4877 flags.go:64] FLAG: --enable-server="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072238 4877 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072246 4877 flags.go:64] FLAG: --event-burst="100" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072251 4877 flags.go:64] FLAG: 
--event-qps="50" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072256 4877 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072261 4877 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072266 4877 flags.go:64] FLAG: --eviction-hard="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072275 4877 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072281 4877 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072287 4877 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072292 4877 flags.go:64] FLAG: --eviction-soft="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072297 4877 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072303 4877 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072308 4877 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072313 4877 flags.go:64] FLAG: --experimental-mounter-path="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072318 4877 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072324 4877 flags.go:64] FLAG: --fail-swap-on="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072329 4877 flags.go:64] FLAG: --feature-gates="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072336 4877 flags.go:64] FLAG: --file-check-frequency="20s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072341 4877 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072346 4877 flags.go:64] 
FLAG: --hairpin-mode="promiscuous-bridge" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072352 4877 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072357 4877 flags.go:64] FLAG: --healthz-port="10248" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072362 4877 flags.go:64] FLAG: --help="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072367 4877 flags.go:64] FLAG: --hostname-override="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072391 4877 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072397 4877 flags.go:64] FLAG: --http-check-frequency="20s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072402 4877 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072408 4877 flags.go:64] FLAG: --image-credential-provider-config="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072413 4877 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072418 4877 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072424 4877 flags.go:64] FLAG: --image-service-endpoint="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072429 4877 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072433 4877 flags.go:64] FLAG: --kube-api-burst="100" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072438 4877 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072444 4877 flags.go:64] FLAG: --kube-api-qps="50" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072449 4877 flags.go:64] FLAG: --kube-reserved="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072454 4877 flags.go:64] 
FLAG: --kube-reserved-cgroup="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072459 4877 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072466 4877 flags.go:64] FLAG: --kubelet-cgroups="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072471 4877 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072476 4877 flags.go:64] FLAG: --lock-file="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072481 4877 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072486 4877 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072492 4877 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072500 4877 flags.go:64] FLAG: --log-json-split-stream="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072505 4877 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072510 4877 flags.go:64] FLAG: --log-text-split-stream="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072515 4877 flags.go:64] FLAG: --logging-format="text" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072520 4877 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072525 4877 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072530 4877 flags.go:64] FLAG: --manifest-url="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072537 4877 flags.go:64] FLAG: --manifest-url-header="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072544 4877 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072549 4877 
flags.go:64] FLAG: --max-open-files="1000000" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072555 4877 flags.go:64] FLAG: --max-pods="110" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072560 4877 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072566 4877 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072571 4877 flags.go:64] FLAG: --memory-manager-policy="None" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072576 4877 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072581 4877 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072586 4877 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072592 4877 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072604 4877 flags.go:64] FLAG: --node-status-max-images="50" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072609 4877 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072614 4877 flags.go:64] FLAG: --oom-score-adj="-999" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072620 4877 flags.go:64] FLAG: --pod-cidr="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072625 4877 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072634 4877 flags.go:64] FLAG: --pod-manifest-path="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072640 4877 flags.go:64] FLAG: --pod-max-pids="-1" Dec 11 
18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072645 4877 flags.go:64] FLAG: --pods-per-core="0" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072651 4877 flags.go:64] FLAG: --port="10250" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072657 4877 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072664 4877 flags.go:64] FLAG: --provider-id="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072670 4877 flags.go:64] FLAG: --qos-reserved="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072675 4877 flags.go:64] FLAG: --read-only-port="10255" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072680 4877 flags.go:64] FLAG: --register-node="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072686 4877 flags.go:64] FLAG: --register-schedulable="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072691 4877 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072701 4877 flags.go:64] FLAG: --registry-burst="10" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072706 4877 flags.go:64] FLAG: --registry-qps="5" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072711 4877 flags.go:64] FLAG: --reserved-cpus="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072717 4877 flags.go:64] FLAG: --reserved-memory="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072724 4877 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072729 4877 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072735 4877 flags.go:64] FLAG: --rotate-certificates="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072740 4877 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072746 
4877 flags.go:64] FLAG: --runonce="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072752 4877 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072758 4877 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072765 4877 flags.go:64] FLAG: --seccomp-default="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072770 4877 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072775 4877 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072781 4877 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072787 4877 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072792 4877 flags.go:64] FLAG: --storage-driver-password="root" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072798 4877 flags.go:64] FLAG: --storage-driver-secure="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072803 4877 flags.go:64] FLAG: --storage-driver-table="stats" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072809 4877 flags.go:64] FLAG: --storage-driver-user="root" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072814 4877 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072819 4877 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072825 4877 flags.go:64] FLAG: --system-cgroups="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072830 4877 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072839 4877 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 11 
18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072844 4877 flags.go:64] FLAG: --tls-cert-file="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072851 4877 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072857 4877 flags.go:64] FLAG: --tls-min-version="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072863 4877 flags.go:64] FLAG: --tls-private-key-file="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072868 4877 flags.go:64] FLAG: --topology-manager-policy="none" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072874 4877 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072879 4877 flags.go:64] FLAG: --topology-manager-scope="container" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072885 4877 flags.go:64] FLAG: --v="2" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072892 4877 flags.go:64] FLAG: --version="false" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072899 4877 flags.go:64] FLAG: --vmodule="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072905 4877 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.072911 4877 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073049 4877 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073082 4877 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073088 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073094 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073098 4877 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073104 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073109 4877 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073114 4877 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073121 4877 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073126 4877 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073131 4877 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073136 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073141 4877 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073146 4877 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073151 4877 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073156 4877 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 
18:00:39.073160 4877 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073166 4877 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073172 4877 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073177 4877 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073183 4877 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073189 4877 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073194 4877 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073200 4877 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073204 4877 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073209 4877 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073213 4877 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073218 4877 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073222 4877 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073227 4877 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 18:00:39 crc 
kubenswrapper[4877]: W1211 18:00:39.073231 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073236 4877 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073240 4877 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073246 4877 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073251 4877 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073257 4877 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073262 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073267 4877 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073272 4877 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073276 4877 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073282 4877 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073286 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073291 4877 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073295 4877 feature_gate.go:330] unrecognized feature gate: Example Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073301 4877 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073305 4877 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073309 4877 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073314 4877 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073319 4877 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073324 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073328 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073333 4877 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073337 4877 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073343 4877 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073347 4877 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073352 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073356 4877 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073361 4877 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073365 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 
18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073388 4877 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073392 4877 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073397 4877 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073401 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073406 4877 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073410 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073414 4877 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073418 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073422 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073427 4877 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073431 4877 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.073435 4877 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.073443 4877 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.082083 4877 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.082126 4877 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082233 4877 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082244 4877 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082250 4877 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082255 4877 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082260 4877 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082265 4877 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082279 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082286 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082291 4877 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082296 4877 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082301 4877 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082306 4877 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082311 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082333 4877 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082339 4877 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082345 4877 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082350 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082356 4877 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082361 4877 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082366 4877 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082395 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082404 4877 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082412 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082418 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082423 4877 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 18:00:39 crc 
kubenswrapper[4877]: W1211 18:00:39.082429 4877 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082436 4877 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082446 4877 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082452 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082460 4877 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082466 4877 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082472 4877 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082477 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082482 4877 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082487 4877 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082492 4877 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082496 4877 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082501 4877 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082506 4877 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082511 4877 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082516 4877 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082521 4877 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082535 4877 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082540 4877 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082545 4877 feature_gate.go:330] unrecognized feature gate: Example Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082550 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082555 4877 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082560 4877 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082565 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082570 4877 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082575 4877 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082579 4877 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082584 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082589 4877 feature_gate.go:330] unrecognized 
feature gate: ClusterMonitoringConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082594 4877 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082600 4877 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082606 4877 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082611 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082617 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082623 4877 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082627 4877 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082634 4877 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082640 4877 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082645 4877 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082650 4877 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082655 4877 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082660 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082665 4877 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082669 4877 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082674 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082679 4877 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.082687 4877 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082896 4877 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082906 4877 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082911 4877 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082917 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082922 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082926 4877 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082939 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082949 4877 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082956 4877 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082962 4877 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082967 4877 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082972 4877 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082979 4877 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082985 4877 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082991 4877 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.082997 4877 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083002 4877 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083008 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083013 4877 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083018 4877 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083024 4877 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083029 4877 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083034 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083039 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083044 4877 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083050 4877 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083054 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083059 4877 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 
18:00:39.083064 4877 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083069 4877 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083074 4877 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083083 4877 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083091 4877 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083099 4877 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083108 4877 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083116 4877 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083125 4877 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083133 4877 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083142 4877 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083149 4877 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083157 4877 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083164 4877 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083189 4877 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083197 4877 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083205 4877 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083213 4877 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083221 4877 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083229 4877 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083236 4877 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083244 4877 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083251 4877 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083259 4877 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083267 4877 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083274 4877 feature_gate.go:330] unrecognized feature gate: Example Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083283 4877 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083292 4877 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083309 4877 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083327 4877 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083338 4877 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083348 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083358 4877 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083368 4877 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083416 4877 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083425 4877 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083432 4877 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083440 4877 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083447 4877 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083455 4877 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083462 4877 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083469 4877 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.083477 4877 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.083489 4877 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.084037 4877 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.091105 4877 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.091441 4877 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.092686 4877 server.go:997] "Starting client certificate rotation" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.092728 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.093400 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 19:53:23.078244909 +0000 UTC Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.093573 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.099331 4877 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.101565 4877 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed 
while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.102573 4877 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.111369 4877 log.go:25] "Validated CRI v1 runtime API" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.132776 4877 log.go:25] "Validated CRI v1 image API" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.134551 4877 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.137768 4877 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-17-56-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.137795 4877 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.150818 4877 manager.go:217] Machine: {Timestamp:2025-12-11 18:00:39.149621471 +0000 UTC m=+0.175865535 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 
SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c213a7b6-d969-4368-856f-6ea24dcb0da0 BootID:0463d847-29f0-4a7f-a5d9-324258f999bf Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:03:4e:ba Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:03:4e:ba Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:35:ec:5f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c7:e5:bb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d3:ae:4e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b7:8a:bb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:b3:3b:32:8e:63 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:36:eb:da:92:85 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} 
{Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: 
DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151002 4877 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151104 4877 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151541 4877 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151729 4877 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151764 4877 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151967 4877 topology_manager.go:138] "Creating topology manager with none policy" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.151977 4877 container_manager_linux.go:303] "Creating device plugin manager" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.152165 4877 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.152191 4877 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.152463 4877 state_mem.go:36] "Initialized new in-memory state store" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.152657 4877 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.153512 4877 kubelet.go:418] "Attempting to sync node with API server" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.153529 4877 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.153549 4877 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.153560 4877 kubelet.go:324] "Adding apiserver pod source" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.153578 4877 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.156007 4877 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.157038 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.157142 4877 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.157154 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.157251 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.158181 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.158538 4877 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159130 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159154 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159162 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159169 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159179 4877 plugins.go:603] "Loaded volume 
plugin" pluginName="kubernetes.io/nfs" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159186 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159193 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159203 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159211 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159220 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159230 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159237 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159421 4877 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.159877 4877 server.go:1280] "Started kubelet" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.160403 4877 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.160152 4877 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.160805 4877 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.161337 4877 server.go:236] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 11 18:00:39 crc systemd[1]: Started Kubernetes Kubelet. Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162059 4877 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162089 4877 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162098 4877 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:39:11.660026719 +0000 UTC Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162249 4877 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 897h38m32.497780732s for next certificate rotation Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162359 4877 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162398 4877 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.162435 4877 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162506 4877 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.162796 4877 server.go:460] "Adding debug handlers to kubelet server" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.162972 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.163408 4877 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.163496 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163658 4877 factory.go:153] Registering CRI-O factory Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163676 4877 factory.go:221] Registration of the crio container factory successfully Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.162497 4877 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18803b1a8d9638b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 18:00:39.159855285 +0000 UTC m=+0.186099329,LastTimestamp:2025-12-11 18:00:39.159855285 +0000 UTC m=+0.186099329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163766 4877 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163776 4877 factory.go:55] Registering systemd factory Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163782 4877 factory.go:221] Registration of the systemd container factory successfully Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163803 4877 factory.go:103] Registering Raw factory Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.163817 4877 manager.go:1196] Started watching for new ooms in manager Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.164340 4877 manager.go:319] Starting recovery of all containers Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.176954 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177045 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177064 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177078 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc 
kubenswrapper[4877]: I1211 18:00:39.177099 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177112 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177130 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177146 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177170 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.177182 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178789 4877 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178831 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178847 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178880 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178894 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178914 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178927 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178950 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178967 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.178983 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.179003 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182283 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182312 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182325 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182337 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182350 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182367 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182435 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182452 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" 
seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182463 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182473 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182486 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182498 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182511 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182523 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 
18:00:39.182535 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182547 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182561 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182572 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182613 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182624 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182636 4877 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182647 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182658 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182672 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182755 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182776 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182792 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182804 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182831 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182848 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182859 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182881 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182895 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182907 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182919 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182932 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182943 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182954 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182963 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182973 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.182990 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183000 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183016 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183026 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183040 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183050 4877 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183064 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183075 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183090 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183101 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183116 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183132 4877 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183143 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183157 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183166 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183180 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183191 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183205 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183215 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183225 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183236 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183244 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183278 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183288 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183322 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183330 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183339 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183348 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183358 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183384 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183403 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183414 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183426 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183437 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183448 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183459 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183470 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183480 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183489 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183498 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183508 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183519 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183528 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183547 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183559 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183571 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183582 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183593 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183606 
4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183623 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183684 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183696 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183706 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183722 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183732 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183742 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183753 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183765 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183776 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183788 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183798 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183808 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183819 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183828 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183861 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183873 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183931 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183942 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183954 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183963 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183972 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.183982 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184089 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184099 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184111 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184120 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184130 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184140 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184150 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184159 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184171 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184180 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184190 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184206 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184216 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184230 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184240 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184249 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184259 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184270 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184281 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.184292 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.188080 4877 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.188922 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.188962 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.188985 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189047 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189068 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189090 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189112 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189133 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189156 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189174 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189198 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189223 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189244 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189262 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189284 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189305 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189325 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189344 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189364 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189445 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189466 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189493 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" 
seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189513 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189535 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189555 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189574 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189598 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189619 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 
18:00:39.189639 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189658 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189677 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189697 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189715 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189733 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189750 4877 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189776 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189790 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189805 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189822 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189837 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189852 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189875 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189888 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189905 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189926 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189944 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189962 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.189982 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190001 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190018 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190035 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190057 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190076 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" 
seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190095 4877 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190113 4877 reconstruct.go:97] "Volume reconstruction finished" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.190126 4877 reconciler.go:26] "Reconciler: start to sync state" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.191995 4877 manager.go:324] Recovery completed Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.200561 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.203769 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.203826 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.203839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.206756 4877 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.206775 4877 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.206797 4877 state_mem.go:36] "Initialized new in-memory state store" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.210789 4877 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.213975 4877 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.214015 4877 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.214043 4877 kubelet.go:2335] "Starting kubelet main sync loop" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.214095 4877 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.218694 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.218987 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.262928 4877 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.314466 4877 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.318302 4877 policy_none.go:49] "None policy: Start" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.319493 4877 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 11 18:00:39 crc 
kubenswrapper[4877]: I1211 18:00:39.319591 4877 state_mem.go:35] "Initializing new in-memory state store" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.363968 4877 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.364369 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.371389 4877 manager.go:334] "Starting Device Plugin manager" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.371489 4877 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.371561 4877 server.go:79] "Starting device plugin registration server" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.372032 4877 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.372113 4877 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.372325 4877 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.372448 4877 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.372463 4877 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.380622 4877 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 18:00:39 crc 
kubenswrapper[4877]: I1211 18:00:39.473167 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.474577 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.474763 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.474886 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.475032 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.475816 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.515145 4877 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.515522 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.516877 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.516924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 
18:00:39.516938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.517104 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.517488 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.517602 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518073 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518198 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518290 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518626 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518758 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518745 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518798 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.518769 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.521932 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.521962 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.521973 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.522952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.523515 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.523554 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.523699 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.523972 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.524063 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.524611 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.524643 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.524655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.524788 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525055 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525094 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525610 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525640 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525679 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525693 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525736 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525806 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525824 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525833 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525892 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.525915 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.526616 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.526648 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.526659 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594137 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594194 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594234 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594267 
4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594304 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594344 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594413 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594449 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594481 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594511 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594539 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594568 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594598 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594651 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.594713 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.676459 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.677829 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.677887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.677906 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.677950 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.678587 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.695925 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696258 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696106 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696329 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696533 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696705 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696700 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696755 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696800 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696839 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696879 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696904 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696920 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696936 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696940 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696964 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696973 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697024 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.696956 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697061 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697071 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697186 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697224 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697199 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697248 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697298 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 
18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.697340 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.698877 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: E1211 18:00:39.766290 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.844103 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.857201 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.864913 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.881160 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2785c7ce68dd94d629de298d67698c2023eb0ef1b77ed797e369e0223d1a6f1c WatchSource:0}: Error finding container 2785c7ce68dd94d629de298d67698c2023eb0ef1b77ed797e369e0223d1a6f1c: Status 404 returned error can't find the container with id 2785c7ce68dd94d629de298d67698c2023eb0ef1b77ed797e369e0223d1a6f1c Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.882709 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-35f0e1dac8199c93e19e6edb101f7556c40630aad586e86243ba8d29ed3612e9 WatchSource:0}: Error finding container 35f0e1dac8199c93e19e6edb101f7556c40630aad586e86243ba8d29ed3612e9: Status 404 returned error can't find the container with id 35f0e1dac8199c93e19e6edb101f7556c40630aad586e86243ba8d29ed3612e9 Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.884148 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.893417 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a9f73b0c62a5e4bea706ecd1fdc5980dfd2fe46a0907a23810ca1f2c8b7c0d0d WatchSource:0}: Error finding container a9f73b0c62a5e4bea706ecd1fdc5980dfd2fe46a0907a23810ca1f2c8b7c0d0d: Status 404 returned error can't find the container with id a9f73b0c62a5e4bea706ecd1fdc5980dfd2fe46a0907a23810ca1f2c8b7c0d0d Dec 11 18:00:39 crc kubenswrapper[4877]: I1211 18:00:39.894577 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.903720 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-43fd7177808a136ed621d277927eb0c0d9d0cc5ca43670d83710120d0b35a339 WatchSource:0}: Error finding container 43fd7177808a136ed621d277927eb0c0d9d0cc5ca43670d83710120d0b35a339: Status 404 returned error can't find the container with id 43fd7177808a136ed621d277927eb0c0d9d0cc5ca43670d83710120d0b35a339 Dec 11 18:00:39 crc kubenswrapper[4877]: W1211 18:00:39.926571 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7aca33c8ff1299fc8be07f1abd4a83c35d8019a0369a58b391cd3539b8b057d4 WatchSource:0}: Error finding container 7aca33c8ff1299fc8be07f1abd4a83c35d8019a0369a58b391cd3539b8b057d4: Status 404 returned error can't find the container with id 7aca33c8ff1299fc8be07f1abd4a83c35d8019a0369a58b391cd3539b8b057d4 Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.079431 4877 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.080710 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.080738 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.080747 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.080770 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.081251 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.162211 4877 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:40 crc kubenswrapper[4877]: W1211 18:00:40.164854 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.164944 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 18:00:40 crc kubenswrapper[4877]: W1211 18:00:40.198041 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.198147 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.223179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7aca33c8ff1299fc8be07f1abd4a83c35d8019a0369a58b391cd3539b8b057d4"} Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.224309 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"43fd7177808a136ed621d277927eb0c0d9d0cc5ca43670d83710120d0b35a339"} Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.225793 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9f73b0c62a5e4bea706ecd1fdc5980dfd2fe46a0907a23810ca1f2c8b7c0d0d"} Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.227278 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"35f0e1dac8199c93e19e6edb101f7556c40630aad586e86243ba8d29ed3612e9"} Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.228873 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2785c7ce68dd94d629de298d67698c2023eb0ef1b77ed797e369e0223d1a6f1c"} Dec 11 18:00:40 crc kubenswrapper[4877]: W1211 18:00:40.431478 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.431548 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:40 crc kubenswrapper[4877]: W1211 18:00:40.481424 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.481529 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:40 crc 
kubenswrapper[4877]: E1211 18:00:40.567042 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Dec 11 18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.787886 4877 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18803b1a8d9638b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 18:00:39.159855285 +0000 UTC m=+0.186099329,LastTimestamp:2025-12-11 18:00:39.159855285 +0000 UTC m=+0.186099329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.881784 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.887256 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.887318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.887339 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:40 crc kubenswrapper[4877]: I1211 18:00:40.887381 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 
18:00:40 crc kubenswrapper[4877]: E1211 18:00:40.887945 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.103:6443: connect: connection refused" node="crc" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.162158 4877 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.103:6443: connect: connection refused Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.180707 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 18:00:41 crc kubenswrapper[4877]: E1211 18:00:41.182166 4877 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.103:6443: connect: connection refused" logger="UnhandledError" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.234629 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84" exitCode=0 Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.234741 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.234787 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.235974 4877 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.236025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.236046 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.238101 4877 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe" exitCode=0 Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.238189 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.238271 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.239580 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.239621 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.239638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.241052 4877 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2" exitCode=0 Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.241104 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.241141 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.242491 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.242547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.242567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.242969 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.243352 4877 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6" exitCode=0 Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.243449 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.243479 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.243967 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.244014 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.244034 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.244759 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.244784 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.244795 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.252578 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.252769 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.252674 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.252892 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.253153 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc"} Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.253948 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.253998 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:41 crc kubenswrapper[4877]: I1211 18:00:41.254020 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.256679 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.256724 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.257597 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.257627 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.257639 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.259226 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.259274 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.259290 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.259241 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.260023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.260056 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.260068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267170 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267335 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267368 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267439 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267465 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.267852 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc 
kubenswrapper[4877]: I1211 18:00:42.269682 4877 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b" exitCode=0 Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.269767 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b"} Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.269830 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.269867 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.270638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.270664 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.270675 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.270938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.270989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.271012 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.357576 4877 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.488901 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.489941 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.489982 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.490001 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:42 crc kubenswrapper[4877]: I1211 18:00:42.490025 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.273981 4877 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f" exitCode=0 Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274048 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f"} Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274131 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274205 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274259 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274221 4877 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.274413 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.275226 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.275273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.275285 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276016 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276138 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276248 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276028 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276426 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276459 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276072 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:43 crc 
kubenswrapper[4877]: I1211 18:00:43.276534 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.276547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:43 crc kubenswrapper[4877]: I1211 18:00:43.331023 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281591 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a"} Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281936 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a"} Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281948 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c"} Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281957 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913"} Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281967 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e"} Dec 
11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281702 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281855 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.281735 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.282842 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.282871 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.282884 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.283860 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.283882 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.283895 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.284892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.284912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:44 crc kubenswrapper[4877]: I1211 18:00:44.284921 4877 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.266528 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.285115 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.286290 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.286336 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.286346 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.619240 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.957596 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.957903 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.959445 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.959503 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:45 crc kubenswrapper[4877]: I1211 18:00:45.959521 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:46 crc kubenswrapper[4877]: I1211 18:00:46.288849 4877 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:46 crc kubenswrapper[4877]: I1211 18:00:46.290025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:46 crc kubenswrapper[4877]: I1211 18:00:46.290097 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:46 crc kubenswrapper[4877]: I1211 18:00:46.290121 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.148556 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.148791 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.150280 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.150338 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.150360 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.878213 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.878532 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.879983 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.880037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:47 crc kubenswrapper[4877]: I1211 18:00:47.880056 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:48 crc kubenswrapper[4877]: I1211 18:00:48.066453 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:48 crc kubenswrapper[4877]: I1211 18:00:48.294329 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:48 crc kubenswrapper[4877]: I1211 18:00:48.295609 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:48 crc kubenswrapper[4877]: I1211 18:00:48.295682 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:48 crc kubenswrapper[4877]: I1211 18:00:48.295703 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:49 crc kubenswrapper[4877]: I1211 18:00:49.032543 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:49 crc kubenswrapper[4877]: I1211 18:00:49.297727 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:49 crc kubenswrapper[4877]: I1211 18:00:49.298920 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:49 crc kubenswrapper[4877]: I1211 18:00:49.298952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:49 crc 
kubenswrapper[4877]: I1211 18:00:49.298965 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:49 crc kubenswrapper[4877]: E1211 18:00:49.380745 4877 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.889130 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.889435 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.891214 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.891291 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.891307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.931673 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.932063 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.934497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.934578 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.934606 4877 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:50 crc kubenswrapper[4877]: I1211 18:00:50.938164 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:51 crc kubenswrapper[4877]: I1211 18:00:51.304751 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:51 crc kubenswrapper[4877]: I1211 18:00:51.305992 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:51 crc kubenswrapper[4877]: I1211 18:00:51.306044 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:51 crc kubenswrapper[4877]: I1211 18:00:51.306070 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:51 crc kubenswrapper[4877]: I1211 18:00:51.312306 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.032655 4877 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.032765 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Dec 11 18:00:52 crc kubenswrapper[4877]: W1211 18:00:52.142400 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.142506 4877 trace.go:236] Trace[921813930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 18:00:42.140) (total time: 10001ms): Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[921813930]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:00:52.142) Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[921813930]: [10.001849966s] [10.001849966s] END Dec 11 18:00:52 crc kubenswrapper[4877]: E1211 18:00:52.142525 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.162689 4877 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 18:00:52 crc kubenswrapper[4877]: E1211 18:00:52.168938 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 11 18:00:52 crc kubenswrapper[4877]: W1211 18:00:52.179537 4877 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.179643 4877 trace.go:236] Trace[1799677779]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 18:00:42.178) (total time: 10001ms): Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[1799677779]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:00:52.179) Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[1799677779]: [10.001326738s] [10.001326738s] END Dec 11 18:00:52 crc kubenswrapper[4877]: E1211 18:00:52.179668 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.307211 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.308366 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.308483 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.308505 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.358410 4877 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.358481 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 18:00:52 crc kubenswrapper[4877]: E1211 18:00:52.491668 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 11 18:00:52 crc kubenswrapper[4877]: W1211 18:00:52.610937 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 18:00:52 crc kubenswrapper[4877]: I1211 18:00:52.611065 4877 trace.go:236] Trace[1982224123]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 18:00:42.609) (total time: 10001ms): Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[1982224123]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:00:52.610) Dec 11 18:00:52 crc kubenswrapper[4877]: Trace[1982224123]: [10.001628784s] [10.001628784s] END Dec 11 18:00:52 crc kubenswrapper[4877]: E1211 18:00:52.611098 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 18:00:53 crc kubenswrapper[4877]: W1211 18:00:53.038010 4877 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 18:00:53 crc kubenswrapper[4877]: I1211 18:00:53.038102 4877 trace.go:236] Trace[697748343]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 18:00:43.036) (total time: 10001ms): Dec 11 18:00:53 crc kubenswrapper[4877]: Trace[697748343]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:00:53.037) Dec 11 18:00:53 crc kubenswrapper[4877]: Trace[697748343]: [10.001369905s] [10.001369905s] END Dec 11 18:00:53 crc kubenswrapper[4877]: E1211 18:00:53.038124 4877 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 18:00:53 crc kubenswrapper[4877]: I1211 18:00:53.481142 4877 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 18:00:53 crc kubenswrapper[4877]: I1211 18:00:53.481254 4877 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 18:00:53 crc kubenswrapper[4877]: I1211 18:00:53.488102 4877 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 18:00:53 crc kubenswrapper[4877]: I1211 18:00:53.488191 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 18:00:55 crc kubenswrapper[4877]: I1211 18:00:55.691954 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:55 crc kubenswrapper[4877]: I1211 18:00:55.693129 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:55 crc kubenswrapper[4877]: I1211 18:00:55.693163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:55 crc kubenswrapper[4877]: I1211 18:00:55.693172 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:55 crc kubenswrapper[4877]: I1211 18:00:55.693191 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:00:55 crc kubenswrapper[4877]: E1211 18:00:55.702677 4877 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra 
config cache not synchronized" node="crc" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.156652 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.156809 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.157979 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.158025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.158035 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.162981 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.322637 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.324186 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.324264 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.324278 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:00:57 crc kubenswrapper[4877]: I1211 18:00:57.674705 4877 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 
18:00:58.164091 4877 apiserver.go:52] "Watching apiserver" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.167753 4877 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.168342 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.168769 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.168809 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.168850 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.168919 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.169154 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.169276 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.169420 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.169635 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.169709 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.176962 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.177098 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.177190 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.177274 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.179597 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.180040 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.180145 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.180198 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.180787 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.207396 4877 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 18:00:58 crc 
kubenswrapper[4877]: I1211 18:00:58.218207 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.231632 4877 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.232181 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.242496 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.253989 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.263922 4877 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.266046 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.278404 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.294399 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.309648 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.478706 4877 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.507417 4877 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.527353 4877 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32798->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.527477 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" 
output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32798->192.168.126.11:17697: read: connection reset by peer" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.527850 4877 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.527882 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.531531 4877 csr.go:261] certificate signing request csr-vqd7q is approved, waiting to be issued Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.562466 4877 csr.go:257] certificate signing request csr-vqd7q is issued Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579249 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579305 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579328 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579350 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579367 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579402 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579418 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579435 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579452 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579467 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579484 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579503 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579520 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579549 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579567 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579589 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579618 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579643 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579665 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579687 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579705 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579726 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579785 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579812 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579833 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579850 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579869 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579887 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579922 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579941 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579966 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579988 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.579993 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580007 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580027 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580046 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580064 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580085 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580083 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580106 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580125 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580142 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580154 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580159 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580205 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580191 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580273 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580227 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580333 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580358 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580403 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580428 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580445 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580465 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580515 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580536 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580551 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580567 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580582 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580612 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580628 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580643 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580658 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 18:00:58 crc 
kubenswrapper[4877]: I1211 18:00:58.580676 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580681 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580737 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580757 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580772 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580788 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580804 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580820 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580835 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580850 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580862 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") 
pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580868 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580906 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580930 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580948 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580957 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580967 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.580987 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581009 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581028 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581030 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581045 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581052 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581133 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581156 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581164 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581217 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581236 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581246 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581247 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581284 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581302 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581303 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581315 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581319 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581361 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581399 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581421 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581440 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581459 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581478 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581496 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581513 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581530 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581547 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " 
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581563 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581579 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581595 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581596 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581613 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581632 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581652 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581669 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581685 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581701 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581717 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581735 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581751 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581769 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581765 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581777 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581783 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581823 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581843 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581846 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581862 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581935 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581935 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581972 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581974 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.581991 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582007 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582012 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582024 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582041 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582058 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582075 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582093 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582112 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582119 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582127 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582160 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582189 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582190 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582212 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582232 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582261 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582276 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582290 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582318 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582341 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582363 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582409 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582428 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582444 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582447 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582476 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582493 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582511 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582530 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582540 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582547 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582587 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582626 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582648 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582673 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582694 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582714 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582733 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582754 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582775 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582795 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582846 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582865 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582887 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 
18:00:58.582908 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582929 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582947 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582968 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583027 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583049 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583070 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583089 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583110 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583131 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583150 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " 
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583169 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583189 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583213 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583243 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583269 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583290 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583309 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583326 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583344 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583363 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584530 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584560 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584580 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584600 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584620 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584640 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584659 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584680 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584698 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584717 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584739 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584760 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584779 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584796 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584815 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584833 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584851 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584868 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584886 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584905 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584925 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584945 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584963 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.584984 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585006 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585029 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585050 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585068 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585112 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585137 
4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585157 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585177 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585195 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585216 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585239 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585261 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585283 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585312 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585336 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585359 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585413 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585440 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585481 4877 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585499 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585512 4877 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585530 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585542 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585553 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585563 4877 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585574 4877 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585586 4877 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585597 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585608 4877 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585620 4877 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585630 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585641 4877 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585653 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585663 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585674 
4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585684 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585694 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585705 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585716 4877 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585726 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585738 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585748 4877 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585758 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587210 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587223 4877 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587234 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587244 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587256 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587285 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587295 4877 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582673 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582786 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582841 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582863 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.582944 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583103 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583406 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583502 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.583841 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585355 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585407 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585890 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.585915 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586092 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586094 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586242 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586242 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586368 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586409 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586425 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586506 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586610 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586651 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.594366 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.594410 4877 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.594477 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.594521 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595028 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595103 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595236 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595455 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586818 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586875 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595569 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586924 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587146 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.587876 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.588059 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.588122 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.588135 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.588287 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.588571 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.588774 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:00:59.088728305 +0000 UTC m=+20.114972349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.589066 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.589176 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.589922 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590410 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590471 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590602 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590752 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590913 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.590934 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591075 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591243 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591259 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591253 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591565 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591566 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591611 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.591658 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592464 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592709 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592925 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592916 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592942 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.592974 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.593042 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.593168 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595495 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595639 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.596398 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.596712 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.596955 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597079 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597191 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.596971 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597214 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597608 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597751 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.597779 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598052 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598145 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598358 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598524 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598529 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598612 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598652 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598711 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598753 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.598935 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.599194 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.599356 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.586725 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.599644 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.595497 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.600351 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.600532 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.601474 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:00:59.10142857 +0000 UTC m=+20.127672624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.601581 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:00:59.101563174 +0000 UTC m=+20.127807218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.601895 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.602035 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.605052 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.605852 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.606983 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.607213 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.607595 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.608064 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.608681 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.610678 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.610984 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.612006 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.613118 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.619776 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.622017 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.622647 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.623427 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.623536 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.623616 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.623752 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:00:59.123727327 +0000 UTC m=+20.149971381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.623729 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.623553 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.624167 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.624215 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.624231 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 18:00:58 crc kubenswrapper[4877]: E1211 18:00:58.624306 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:00:59.124279572 +0000 UTC m=+20.150523606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.624506 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.625205 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.625467 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.626367 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.626799 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.627190 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.627580 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.627901 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.628287 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.628483 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.628615 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.629021 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.629319 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.629574 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.629655 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.629811 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630217 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630322 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630495 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630573 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630767 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.630833 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.631060 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.633663 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.634120 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.634182 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.634400 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.634915 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.635087 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.635151 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.635188 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.635317 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.636213 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.639533 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.639733 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640272 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640359 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640446 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640600 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640891 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640702 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.640947 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.641672 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.641757 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.643160 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.644990 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.645148 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.645387 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.645751 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.648931 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.649712 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.651412 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.651538 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.651980 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.652196 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.658321 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.667770 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.670246 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.688776 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.688865 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.688933 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.688985 4877 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689000 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689015 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689029 4877 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689044 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689058 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689072 4877 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689090 4877 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689105 4877 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689118 4877 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689133 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689147 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689161 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689177 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689193 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689208 4877 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689222 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689235 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 
18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689247 4877 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689261 4877 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689278 4877 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689291 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689305 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689319 4877 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689332 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689344 
4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689356 4877 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689391 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689407 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689418 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689433 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689444 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689455 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689467 4877 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689479 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689490 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689501 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689513 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689530 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689544 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689564 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689577 4877 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689590 4877 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689603 4877 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689616 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689627 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689639 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689651 4877 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689663 4877 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689678 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689690 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689702 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689714 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689726 4877 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689737 4877 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc 
kubenswrapper[4877]: I1211 18:00:58.689749 4877 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689762 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689773 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689787 4877 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689799 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689812 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689825 4877 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689839 4877 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689851 4877 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689862 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689875 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689886 4877 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689910 4877 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689921 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689934 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689947 4877 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689959 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689972 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689983 4877 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.689994 4877 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690006 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690018 4877 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690029 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690041 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690054 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690066 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690077 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690090 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690102 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on 
node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690114 4877 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690124 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690137 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690148 4877 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690160 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690172 4877 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690184 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690195 4877 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690207 4877 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690218 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690229 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690241 4877 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690252 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690263 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690273 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690285 4877 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690297 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690308 4877 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690321 4877 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690333 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690344 4877 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690354 4877 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690468 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690701 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690365 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690731 4877 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690745 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690757 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 
18:00:58.690770 4877 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690784 4877 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690795 4877 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690808 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690821 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690834 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690845 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690858 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690872 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690885 4877 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690898 4877 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690911 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690925 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690938 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690950 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690963 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690975 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.690989 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691003 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691042 4877 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691056 4877 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691069 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: 
I1211 18:00:58.691081 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691093 4877 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691107 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691120 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691133 4877 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691147 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691160 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691172 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" 
(UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691184 4877 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691196 4877 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691208 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691221 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691234 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691248 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691263 4877 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691276 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691289 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691302 4877 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691315 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691327 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691341 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691353 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691365 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691514 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691556 4877 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691570 4877 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691583 4877 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.691597 4877 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.693151 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.780869 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.791983 4877 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.793045 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.800413 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 18:00:58 crc kubenswrapper[4877]: W1211 18:00:58.801986 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e3e334f3f77a23d3d31f21b2c5f60ad0e33012a9a1702a39db87b5e5767aa52c WatchSource:0}: Error finding container e3e334f3f77a23d3d31f21b2c5f60ad0e33012a9a1702a39db87b5e5767aa52c: Status 404 returned error can't find the container with id e3e334f3f77a23d3d31f21b2c5f60ad0e33012a9a1702a39db87b5e5767aa52c Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.953971 4877 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.959426 4877 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 18:00:58 crc kubenswrapper[4877]: I1211 18:00:58.959476 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.037195 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.041006 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:00:59 crc kubenswrapper[4877]: 
I1211 18:00:59.047172 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.051722 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.064512 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.075368 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.091116 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.093865 4877 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.093998 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094017 4877 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094060 4877 reflector.go:484] 
object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.094087 4877 request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.103:51182->38.102.83.103:6443: use of closed network connection Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094102 4877 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.094123 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:01:00.09410902 +0000 UTC m=+21.120353064 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.094118 4877 status_manager.go:851] "Failed to get status for pod" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" pod="openshift-network-operator/iptables-alerter-4ln5h" err="unexpected error when reading response body. Please retry. 
Original error: read tcp 38.102.83.103:51182->38.102.83.103:6443: use of closed network connection" Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094158 4877 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094156 4877 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094201 4877 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094217 4877 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094185 4877 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094263 4877 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - 
watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094292 4877 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: W1211 18:00:59.094316 4877 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.094241 4877 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.103:51182->38.102.83.103:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18803b1ab8f01443 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 18:00:39.887164483 +0000 UTC m=+0.913408567,LastTimestamp:2025-12-11 18:00:39.887164483 +0000 UTC m=+0.913408567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 18:00:59 crc kubenswrapper[4877]: 
W1211 18:00:59.094334 4877 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.094353 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.103:51182->38.102.83.103:6443: use of closed network connection" interval="6.4s" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.108143 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.117867 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.130314 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.141484 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.154838 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.186941 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.194981 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.195022 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.195048 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.195093 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195152 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195216 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195217 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195246 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:00.195224671 +0000 UTC m=+21.221468785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195270 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195280 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195291 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195308 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:00.195288302 +0000 UTC m=+21.221532346 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195231 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195338 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:00.195322593 +0000 UTC m=+21.221566637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195346 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:00:59 crc kubenswrapper[4877]: E1211 18:00:59.195409 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:00.195402455 +0000 UTC m=+21.221646499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.219247 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.219820 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.220779 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.221441 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.222004 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.222517 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.223100 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.223661 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.224279 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.224801 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.225359 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.226030 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.226566 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.227092 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.227805 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.230109 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.230632 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.231824 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.232197 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 
18:00:59.232746 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.234637 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.235169 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.235761 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.236607 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.237264 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.238075 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.238756 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 
18:00:59.239815 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.240270 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.241330 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.241938 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.242431 4877 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.242541 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.244887 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.245510 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.246452 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.247987 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.248801 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.248954 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.249876 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.250542 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.251773 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.253261 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.254326 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.255063 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.256025 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.256502 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.257444 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.257993 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.259045 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.259536 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.260452 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.260914 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.261515 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.262436 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.262926 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.265744 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.280827 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.293884 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.311327 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.327415 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d04f1fe7a638a2364a5c1bcce1781d72f5bf4d151c59b369b23e10ba2dfd2655"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.328897 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.328941 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.328960 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e3e334f3f77a23d3d31f21b2c5f60ad0e33012a9a1702a39db87b5e5767aa52c"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.330596 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.330668 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"46030a1f6c8e7b8b05611367299a72f25142c9b150d0021834e0e95de62b3c3d"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.332445 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.338557 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b" exitCode=255 Dec 
11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.338656 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b"} Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.340914 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.351482 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.352191 4877 scope.go:117] "RemoveContainer" containerID="58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.358741 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.373433 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.393090 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.420423 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.445852 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.466455 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.485122 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.500153 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.520738 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.540533 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:00:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.564272 4877 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-11 17:55:58 +0000 UTC, rotation deadline is 2026-10-15 11:55:44.24763121 +0000 UTC Dec 11 18:00:59 crc kubenswrapper[4877]: I1211 18:00:59.564361 4877 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7385h54m44.683272971s for next certificate rotation Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.089274 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.105013 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.105182 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:01:02.105159479 +0000 UTC m=+23.131403533 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.180550 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vjskq"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.180845 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.185609 4877 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: secrets "node-ca-dockercfg-4777p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.185622 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.185665 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.185671 4877 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-ca-dockercfg-4777p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.186169 4877 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.186192 4877 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.200211 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.203170 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.205682 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 
18:01:00.205739 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.205762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.205789 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.205931 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.205948 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.205960 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206019 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:02.206002451 +0000 UTC m=+23.232246495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206077 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206087 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206095 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206125 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:02.206117224 +0000 UTC m=+23.232361268 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206190 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206303 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:02.206281669 +0000 UTC m=+23.232525773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206295 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.206474 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:02.206444503 +0000 UTC m=+23.232688737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.214572 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.214640 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.214582 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.214766 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.214899 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:00 crc kubenswrapper[4877]: E1211 18:01:00.215033 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.221793 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.224459 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.224980 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.239407 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.256565 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.302346 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.306725 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcbb2f70-a54d-405a-b5f5-5857dd18b526-host\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc 
kubenswrapper[4877]: I1211 18:01:00.306773 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcbb2f70-a54d-405a-b5f5-5857dd18b526-serviceca\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.306791 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lglq\" (UniqueName: \"kubernetes.io/projected/fcbb2f70-a54d-405a-b5f5-5857dd18b526-kube-api-access-9lglq\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.343448 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.345492 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f"} Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.345816 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.351939 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.372711 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.373929 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.400322 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.407900 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcbb2f70-a54d-405a-b5f5-5857dd18b526-serviceca\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.407930 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lglq\" (UniqueName: \"kubernetes.io/projected/fcbb2f70-a54d-405a-b5f5-5857dd18b526-kube-api-access-9lglq\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.407966 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcbb2f70-a54d-405a-b5f5-5857dd18b526-host\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.408011 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fcbb2f70-a54d-405a-b5f5-5857dd18b526-host\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.409294 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fcbb2f70-a54d-405a-b5f5-5857dd18b526-serviceca\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.428518 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.429806 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.442994 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.446076 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.459460 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.473010 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.488141 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.502911 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.517074 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.518670 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.528439 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.542899 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 
UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.555870 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.570627 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.604491 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-665tk"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.605496 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sjnxr"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.605677 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.605973 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.607263 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.607284 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.607365 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.607707 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gwfnt"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.607988 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.608282 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvb5p"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.608589 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.608752 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.608847 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.608866 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.609048 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.609074 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.609141 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.609700 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.612333 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dtgjg"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.612574 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.613131 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.615655 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.615944 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.618552 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.619840 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.619867 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.619841 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.619951 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.619940 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.620461 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.621037 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.621073 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.633324 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.645840 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.658932 4877 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.669069 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.686183 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.705002 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710356 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710405 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710424 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-multus-certs\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710440 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710463 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710477 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-system-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710492 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvfm\" (UniqueName: \"kubernetes.io/projected/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-kube-api-access-spvfm\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710507 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx59\" (UniqueName: \"kubernetes.io/projected/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-kube-api-access-chx59\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710536 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710551 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710566 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710582 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710598 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-k8s-cni-cncf-io\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710613 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-netns\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710628 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710642 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710657 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710672 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnsg\" (UniqueName: \"kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710689 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-binary-copy\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710706 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-proxy-tls\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc 
kubenswrapper[4877]: I1211 18:01:00.710723 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-tuning-conf-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710737 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-daemon-config\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710750 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710782 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710855 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-os-release\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " 
pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-bin\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710917 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-system-cni-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.710979 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711009 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711030 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rfm\" (UniqueName: \"kubernetes.io/projected/d0c29e17-9aad-46b1-bbff-eb00cc938537-kube-api-access-88rfm\") pod 
\"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711149 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-hostroot\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711239 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711271 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cnibin\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711302 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-kubelet\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711332 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-rootfs\") pod \"machine-config-daemon-sjnxr\" (UID: 
\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711363 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711446 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711475 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-os-release\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711504 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-multus\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711533 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-conf-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711560 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-etc-kubernetes\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711620 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711648 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711696 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711724 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711754 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-cnibin\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711793 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cni-binary-copy\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711830 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-hosts-file\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711860 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc8w4\" (UniqueName: \"kubernetes.io/projected/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-kube-api-access-nc8w4\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.711914 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-socket-dir-parent\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.731936 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.748128 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.759755 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.773484 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.787423 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.811535 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812809 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cnibin\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812853 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-kubelet\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812873 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-rootfs\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812892 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812937 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-rootfs\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.812901 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813003 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813027 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813019 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-kubelet\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " 
pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813057 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cnibin\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813100 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-os-release\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813120 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-multus\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813164 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-conf-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813182 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-etc-kubernetes\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813218 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-multus\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-conf-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813295 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813333 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-etc-kubernetes\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813350 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813371 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd\") pod \"ovnkube-node-qvb5p\" (UID: 
\"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813401 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-os-release\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813444 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813416 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813402 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813507 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-cnibin\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 
18:01:00.813548 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-cnibin\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813563 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cni-binary-copy\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813590 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-hosts-file\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813608 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc8w4\" (UniqueName: \"kubernetes.io/projected/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-kube-api-access-nc8w4\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813625 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-socket-dir-parent\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813643 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813653 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-hosts-file\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813657 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813694 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-multus-certs\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813718 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813717 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813733 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813750 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-system-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813751 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-socket-dir-parent\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813764 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvfm\" (UniqueName: \"kubernetes.io/projected/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-kube-api-access-spvfm\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813761 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc 
kubenswrapper[4877]: I1211 18:01:00.813783 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chx59\" (UniqueName: \"kubernetes.io/projected/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-kube-api-access-chx59\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813760 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813773 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-multus-certs\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813828 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-system-cni-dir\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.813950 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814010 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814023 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814029 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814052 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814054 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814065 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814083 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814091 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814029 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814141 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 
18:01:00.814175 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-k8s-cni-cncf-io\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814212 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-netns\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814233 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814259 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814266 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814278 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-netns\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814287 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnsg\" (UniqueName: \"kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814316 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-binary-copy\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814337 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-daemon-config\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814586 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-cni-binary-copy\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814641 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-run-k8s-cni-cncf-io\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814857 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814885 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814894 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0c29e17-9aad-46b1-bbff-eb00cc938537-cni-binary-copy\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.814899 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-proxy-tls\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815080 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-tuning-conf-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815126 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815162 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815183 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-multus-daemon-config\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815201 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815231 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-os-release\") pod 
\"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815254 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-bin\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815279 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-system-cni-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815348 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815404 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-host-var-lib-cni-bin\") pod \"multus-gwfnt\" (UID: 
\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815433 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815456 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rfm\" (UniqueName: \"kubernetes.io/projected/d0c29e17-9aad-46b1-bbff-eb00cc938537-kube-api-access-88rfm\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815483 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-hostroot\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815563 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-tuning-conf-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815597 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815661 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-hostroot\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815661 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-system-cni-dir\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815701 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0c29e17-9aad-46b1-bbff-eb00cc938537-os-release\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815706 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.815753 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.819956 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.830767 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-proxy-tls\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.838479 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvfm\" (UniqueName: \"kubernetes.io/projected/61afe7d0-ec5b-41aa-a8fb-6628b863a59c-kube-api-access-spvfm\") pod \"multus-gwfnt\" (UID: \"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\") " pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.841956 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnsg\" (UniqueName: \"kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg\") pod \"ovnkube-node-qvb5p\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.844062 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rfm\" (UniqueName: \"kubernetes.io/projected/d0c29e17-9aad-46b1-bbff-eb00cc938537-kube-api-access-88rfm\") pod \"multus-additional-cni-plugins-665tk\" (UID: \"d0c29e17-9aad-46b1-bbff-eb00cc938537\") " pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.845791 4877 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.846712 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc8w4\" (UniqueName: \"kubernetes.io/projected/c74b9aa5-bb2b-4d63-9ce6-ea21336f0741-kube-api-access-nc8w4\") pod \"node-resolver-dtgjg\" (UID: \"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\") " pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.849475 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-chx59\" (UniqueName: \"kubernetes.io/projected/47cbee6c-de7f-4f75-8a7b-6d4e7da6f963-kube-api-access-chx59\") pod \"machine-config-daemon-sjnxr\" (UID: \"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\") " pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.867589 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.884656 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.899233 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.916914 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.918225 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.919306 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-665tk" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.928648 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.930510 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.931124 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.932022 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0c29e17_9aad_46b1_bbff_eb00cc938537.slice/crio-eeb5f6adc25e769a12a81eecc9cf61042ace4358e5287b43f42d8ba2cb3a9bc4 WatchSource:0}: Error finding container eeb5f6adc25e769a12a81eecc9cf61042ace4358e5287b43f42d8ba2cb3a9bc4: Status 404 returned error can't find the container with id eeb5f6adc25e769a12a81eecc9cf61042ace4358e5287b43f42d8ba2cb3a9bc4 Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.933109 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.935180 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gwfnt" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.940549 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47cbee6c_de7f_4f75_8a7b_6d4e7da6f963.slice/crio-9d0c524549ca03773afaea394b7ec1ba10fc4b1d35cc9a9ced263976eb55e4c6 WatchSource:0}: Error finding container 9d0c524549ca03773afaea394b7ec1ba10fc4b1d35cc9a9ced263976eb55e4c6: Status 404 returned error can't find the container with id 9d0c524549ca03773afaea394b7ec1ba10fc4b1d35cc9a9ced263976eb55e4c6 Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.942651 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.947113 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiti
ng\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.947847 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61afe7d0_ec5b_41aa_a8fb_6628b863a59c.slice/crio-5de4f1487629e23bf110d58c04e7c20035d33000b65ce92b98da1fadf591f303 WatchSource:0}: Error finding container 5de4f1487629e23bf110d58c04e7c20035d33000b65ce92b98da1fadf591f303: Status 404 returned error can't find the container with id 5de4f1487629e23bf110d58c04e7c20035d33000b65ce92b98da1fadf591f303 Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.949616 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-dtgjg" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.962667 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.975502 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.983215 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc74b9aa5_bb2b_4d63_9ce6_ea21336f0741.slice/crio-221fdf3ab55145931650852fc89caa016f75e801a9db097d0d6de52dcb44ea64 WatchSource:0}: Error finding container 221fdf3ab55145931650852fc89caa016f75e801a9db097d0d6de52dcb44ea64: Status 404 returned error can't find the container with id 221fdf3ab55145931650852fc89caa016f75e801a9db097d0d6de52dcb44ea64 Dec 11 18:01:00 crc kubenswrapper[4877]: W1211 18:01:00.984044 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea4114b7_a44c_4220_a321_9f18bbb90151.slice/crio-879cba78f4464cca9871cfed9888654595bf2d321b37cea1f337481c25bfa93f WatchSource:0}: Error finding container 879cba78f4464cca9871cfed9888654595bf2d321b37cea1f337481c25bfa93f: Status 404 returned error can't find the container with id 
879cba78f4464cca9871cfed9888654595bf2d321b37cea1f337481c25bfa93f Dec 11 18:01:00 crc kubenswrapper[4877]: I1211 18:01:00.988633 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac5
7b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:00Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.008853 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.019879 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.032846 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.046291 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.059067 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.084838 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.115892 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.128145 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.142053 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lglq\" (UniqueName: \"kubernetes.io/projected/fcbb2f70-a54d-405a-b5f5-5857dd18b526-kube-api-access-9lglq\") pod \"node-ca-vjskq\" (UID: \"fcbb2f70-a54d-405a-b5f5-5857dd18b526\") " pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.166576 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.194158 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.220515 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.238583 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.252743 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.266051 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.277870 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.282803 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.301882 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.315498 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.346590 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.349909 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f" exitCode=0 Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.350030 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f"} Dec 11 
18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.350085 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerStarted","Data":"eeb5f6adc25e769a12a81eecc9cf61042ace4358e5287b43f42d8ba2cb3a9bc4"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.351650 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtgjg" event={"ID":"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741","Type":"ContainerStarted","Data":"163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.351720 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtgjg" event={"ID":"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741","Type":"ContainerStarted","Data":"221fdf3ab55145931650852fc89caa016f75e801a9db097d0d6de52dcb44ea64"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.352893 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerStarted","Data":"e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.352927 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerStarted","Data":"5de4f1487629e23bf110d58c04e7c20035d33000b65ce92b98da1fadf591f303"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.357198 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.358213 4877 generic.go:334] "Generic (PLEG): container 
finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" exitCode=0 Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.358285 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.358323 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"879cba78f4464cca9871cfed9888654595bf2d321b37cea1f337481c25bfa93f"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.359755 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.359827 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.359843 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"9d0c524549ca03773afaea394b7ec1ba10fc4b1d35cc9a9ced263976eb55e4c6"} Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.388231 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.395486 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vjskq" Dec 11 18:01:01 crc kubenswrapper[4877]: W1211 18:01:01.420991 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbb2f70_a54d_405a_b5f5_5857dd18b526.slice/crio-05cc7fa16ea658a3f7dfdb4ff2627672958f68b201925a4dcaf1b67473a4dcaa WatchSource:0}: Error finding container 05cc7fa16ea658a3f7dfdb4ff2627672958f68b201925a4dcaf1b67473a4dcaa: Status 404 returned error can't find the container with id 05cc7fa16ea658a3f7dfdb4ff2627672958f68b201925a4dcaf1b67473a4dcaa Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.447995 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.469201 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.507086 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.553780 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.590772 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.631558 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.667356 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.711069 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.755519 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.787445 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.827659 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.867439 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.909024 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.945769 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:01 crc kubenswrapper[4877]: I1211 18:01:01.987594 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:01Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.028350 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.071069 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.103252 4877 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.105916 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.105970 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.105982 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.106160 4877 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.107325 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.129722 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.129904 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:01:06.129877778 +0000 UTC m=+27.156121842 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.160342 4877 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.160600 4877 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.161820 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.161857 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.161870 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.161887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.161900 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.179804 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.184885 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.184915 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.184925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.184938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.184948 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.188493 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.196230 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.199813 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.199839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.199857 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.199870 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.199878 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.210340 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.213862 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.213890 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.213900 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.213916 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.213926 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.215217 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.215288 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.215306 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.215419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.215643 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.215773 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.230966 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.231030 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.231067 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.231093 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231108 4877 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231048 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231218 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:06.231188783 +0000 UTC m=+27.257432867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231229 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231249 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231261 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231310 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:06.231292206 +0000 UTC m=+27.257536350 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231315 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231436 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:06.23141397 +0000 UTC m=+27.257658014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231559 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231573 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231607 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.231652 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:06.231642836 +0000 UTC m=+27.257887010 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.233067 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.234987 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.235019 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.235032 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.235047 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.235057 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.251624 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: E1211 18:01:02.251769 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.253720 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.253765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.253778 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.253798 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.253811 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.274828 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.314527 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.351492 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 
UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.356423 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.356475 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.356487 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.356510 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.356523 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.365514 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.365565 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.366963 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vjskq" event={"ID":"fcbb2f70-a54d-405a-b5f5-5857dd18b526","Type":"ContainerStarted","Data":"05cc7fa16ea658a3f7dfdb4ff2627672958f68b201925a4dcaf1b67473a4dcaa"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.369968 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be" exitCode=0 Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.370468 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.394718 4877 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.435116 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.462236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.462263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc 
kubenswrapper[4877]: I1211 18:01:02.462271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.462283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.462292 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.469652 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.509026 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.556856 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.565584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.565612 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.565624 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.565639 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.565651 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.587628 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.625965 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.668567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.668616 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.668628 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.668645 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.668656 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.669318 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d84
8e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.708147 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.746950 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.771023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.771057 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.771068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 
18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.771084 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.771120 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.788577 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.827743 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.870548 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.873104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.873135 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.873146 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.873162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.873176 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.918134 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.956495 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.975008 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.975252 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.975318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.975407 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.975480 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:02Z","lastTransitionTime":"2025-12-11T18:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:02 crc kubenswrapper[4877]: I1211 18:01:02.989501 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:02Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.029175 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.067046 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6
eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.080183 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.080215 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.080227 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 
crc kubenswrapper[4877]: I1211 18:01:03.080248 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.080260 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.109844 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.157311 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.182848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.182884 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.182892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.182906 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.182916 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.197306 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.285770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.286522 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.286550 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.286576 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.286587 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.377365 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.377438 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.377451 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.377461 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.378976 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vjskq" event={"ID":"fcbb2f70-a54d-405a-b5f5-5857dd18b526","Type":"ContainerStarted","Data":"07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e"} Dec 11 18:01:03 crc 
kubenswrapper[4877]: I1211 18:01:03.382094 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0" exitCode=0 Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.382124 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.389215 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.389246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.389257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.389271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.389281 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.402249 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.414368 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.427671 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.440244 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.453323 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.463390 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.476234 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.491898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.491945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.491957 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.491977 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.491989 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.515443 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.557110 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.590149 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.597747 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.597805 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.597818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.597835 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.597846 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.629670 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.671127 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.699547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.699577 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.699588 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.699600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.699611 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.708203 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.747318 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.793008 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.802060 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.802101 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.802112 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.802126 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.802136 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.831606 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.871195 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.905249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.905286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.905296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:03 crc 
kubenswrapper[4877]: I1211 18:01:03.905311 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.905320 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:03Z","lastTransitionTime":"2025-12-11T18:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.913541 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.949558 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:03 crc kubenswrapper[4877]: I1211 18:01:03.997192 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:03Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.007726 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.007755 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.007765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.007781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.007791 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.040504 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.070216 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.110931 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.111000 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.111018 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.111042 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.111059 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.113784 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.157905 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.189298 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d
66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.213945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214017 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214036 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214067 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214087 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214477 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214585 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:04 crc kubenswrapper[4877]: E1211 18:01:04.214751 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.214828 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:04 crc kubenswrapper[4877]: E1211 18:01:04.214940 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:04 crc kubenswrapper[4877]: E1211 18:01:04.215107 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.230172 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.270055 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.311267 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.317504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.317535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.317547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.317562 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.317572 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.350518 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.389094 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263" exitCode=0 Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.389157 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.395362 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.421144 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.421175 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.421184 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.421198 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.421208 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.434476 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.470223 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.511229 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.523526 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 
18:01:04.523567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.523582 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.523606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.523620 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.556431 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.586751 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.626967 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.627020 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.627038 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc 
kubenswrapper[4877]: I1211 18:01:04.627063 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.627118 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.627511 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.672010 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.709505 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.730353 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.730431 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.730450 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.730476 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.730493 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.747848 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.842158 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.842204 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.842215 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.842231 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.842246 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.844814 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.862126 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.876541 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.907449 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.945165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.945366 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.945446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.945528 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.945590 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:04Z","lastTransitionTime":"2025-12-11T18:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.957745 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:04 crc kubenswrapper[4877]: I1211 18:01:04.993228 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.047658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.047875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.047994 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.048065 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.048121 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.151273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.151344 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.151353 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.151413 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.151442 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.254171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.254220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.254232 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.254249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.254262 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.362310 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.362360 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.362387 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.362405 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.362417 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.397823 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3" exitCode=0 Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.397906 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.405523 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.425166 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.453849 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.465652 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.465698 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.465709 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.465728 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.465740 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.475058 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.493059 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.508082 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.522779 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.539341 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.554287 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.567554 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.568888 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 
18:01:05.568927 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.568938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.568963 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.568980 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.588649 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.604050 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.615858 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.631037 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.646108 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.664889 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:05Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.673273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.673321 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.673331 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.673349 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.673360 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.777043 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.777100 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.777110 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.777133 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.777145 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.885516 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.885592 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.885617 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.885654 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.885678 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.988131 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.988188 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.988203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.988233 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:05 crc kubenswrapper[4877]: I1211 18:01:05.988253 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:05Z","lastTransitionTime":"2025-12-11T18:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.091230 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.091276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.091287 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.091307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.091318 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.176788 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.177059 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 18:01:14.177022134 +0000 UTC m=+35.203266218 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.194232 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.194272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.194281 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.194295 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.194304 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.214937 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.215060 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.214943 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.215135 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.214937 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.215884 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.278945 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.278993 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.279026 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.279058 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279100 4877 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279183 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279203 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279203 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279223 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:14.279196153 +0000 UTC m=+35.305440227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279239 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279241 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279329 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:14.279306646 +0000 UTC m=+35.305550720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279260 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279402 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:14.279392878 +0000 UTC m=+35.305636922 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279214 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:06 crc kubenswrapper[4877]: E1211 18:01:06.279502 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:14.27946992 +0000 UTC m=+35.305714004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.297277 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.297351 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.297408 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.297438 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.297456 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.399937 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.399993 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.400012 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.400039 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.400059 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.415251 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0c29e17-9aad-46b1-bbff-eb00cc938537" containerID="942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2" exitCode=0 Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.415303 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerDied","Data":"942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.435326 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.451783 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.467458 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.491764 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.503210 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.503246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.503259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.503279 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.503291 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.511614 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.534323 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.556338 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.574767 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 
UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.591038 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.596808 4877 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.604111 4877 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.605368 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.605421 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.605430 4877 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.605446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.605456 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.640347 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.669070 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.688158 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.702664 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.707863 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.707936 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.707955 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.708037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.708058 4877 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.714749 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:06Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.810549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.810615 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.810633 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.810658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.810675 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.868875 4877 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.913831 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.913892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.913915 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.913943 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:06 crc kubenswrapper[4877]: I1211 18:01:06.913965 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:06Z","lastTransitionTime":"2025-12-11T18:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.017657 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.018094 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.018109 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.018133 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.018146 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.124741 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.124803 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.124818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.124842 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.124857 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.227799 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.227844 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.227854 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.227870 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.227883 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.330625 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.330692 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.330706 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.330728 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.330741 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.434860 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.435569 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.435620 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.435650 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.435667 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.538579 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.538641 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.538653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.538676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.538689 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.642251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.642300 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.642317 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.642341 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.642358 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.745233 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.745302 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.745321 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.745345 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.745400 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.848672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.848710 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.848724 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.848742 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.848754 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.951472 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.951504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.951512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.951526 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:07 crc kubenswrapper[4877]: I1211 18:01:07.951535 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:07Z","lastTransitionTime":"2025-12-11T18:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.054692 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.054731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.054741 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.054758 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.054768 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.157604 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.157663 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.157675 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.157697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.157713 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.214892 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.214998 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.214888 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:08 crc kubenswrapper[4877]: E1211 18:01:08.215786 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:08 crc kubenswrapper[4877]: E1211 18:01:08.215880 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:08 crc kubenswrapper[4877]: E1211 18:01:08.215694 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.260674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.260747 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.260766 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.260798 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.260818 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.363513 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.363584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.363622 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.363646 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.363658 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.427655 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" event={"ID":"d0c29e17-9aad-46b1-bbff-eb00cc938537","Type":"ContainerStarted","Data":"fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.432455 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.436617 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.436708 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.456079 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.466692 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.466736 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.466746 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.466767 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.466779 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.474596 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.475336 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.475329 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.497092 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.523869 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.540494 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 
UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.555663 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.569478 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.569591 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.569606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.569632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.569652 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.571837 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.630521 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.645412 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.661109 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.672421 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 
18:01:08.672634 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.672753 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.672855 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.672955 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.683893 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.696959 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.710427 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.724537 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.738654 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.754138 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.769215 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.775698 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.775731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.775741 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.775755 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.775766 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.781070 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.811506 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.835710 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.856620 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 
UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.872935 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.878287 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.878574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.878737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.878921 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.879098 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.886462 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.899496 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.911653 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.924582 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.938689 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.953895 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.967937 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: 
I1211 18:01:08.979679 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:08Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.981208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.981244 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.981254 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.981270 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:08 crc kubenswrapper[4877]: I1211 18:01:08.981282 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:08Z","lastTransitionTime":"2025-12-11T18:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.083952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.084214 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.084343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.084484 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.084597 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.192063 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.192107 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.192120 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.192142 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.192153 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.250604 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.266734 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.300068 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.302052 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.302104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.302120 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.302140 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.302156 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.320448 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.339327 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.354757 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.372294 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.403866 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.404911 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.405089 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.405246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.405586 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.405753 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.428168 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.435960 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.448184 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.464520 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.482708 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.500408 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: 
I1211 18:01:09.510114 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.510441 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.510543 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.510671 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.510774 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.516339 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.529149 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:09Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.614520 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.614573 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.614584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.614606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.614619 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.717431 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.717479 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.717492 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.717514 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.717528 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.824665 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.824705 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.824719 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.824737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.824751 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.929408 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.929462 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.929480 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.929508 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:09 crc kubenswrapper[4877]: I1211 18:01:09.929525 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:09Z","lastTransitionTime":"2025-12-11T18:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.031760 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.031815 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.031827 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.031848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.031862 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.134396 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.134490 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.134502 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.134525 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.134539 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.214634 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:10 crc kubenswrapper[4877]: E1211 18:01:10.214842 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.215116 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.215143 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:10 crc kubenswrapper[4877]: E1211 18:01:10.215337 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:10 crc kubenswrapper[4877]: E1211 18:01:10.215483 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.238002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.238051 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.238061 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.238077 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.238088 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.341463 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.341569 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.341595 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.341632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.341658 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.440626 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.444890 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.444933 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.444945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.444966 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.444979 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.529436 4877 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.548659 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.548735 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.548757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.548789 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.548816 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.652817 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.652891 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.652904 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.652929 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.652944 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.755535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.755583 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.755595 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.755612 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.755624 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.858512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.858567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.858576 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.858591 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.858600 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.960531 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.960592 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.960601 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.960615 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:10 crc kubenswrapper[4877]: I1211 18:01:10.960625 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:10Z","lastTransitionTime":"2025-12-11T18:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.064262 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.064335 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.064352 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.064407 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.064426 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.166981 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.167155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.167189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.167257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.167280 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.270313 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.270367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.270421 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.270444 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.270463 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.372875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.372920 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.372931 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.372946 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.372957 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.446032 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/0.log" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.450309 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc" exitCode=1 Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.450448 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.452066 4877 scope.go:117] "RemoveContainer" containerID="0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481548 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481899 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481934 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.481994 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.501365 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.520292 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.536066 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.555191 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.572682 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.585436 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.585726 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.586095 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.586350 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.586681 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.594878 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] 
Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 
18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d337
7b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.621855 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.638420 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 
18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.653914 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.664417 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.676618 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.689262 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.690072 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.690130 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.690140 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.690160 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.690174 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.703067 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.712976 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:11Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.793263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.793318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.793334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.793352 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.793366 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.896266 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.896320 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.896331 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.896353 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.896366 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.998468 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.998530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.998549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.998574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:11 crc kubenswrapper[4877]: I1211 18:01:11.998592 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:11Z","lastTransitionTime":"2025-12-11T18:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.100745 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.100785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.100793 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.100807 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.100820 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.203050 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.203093 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.203104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.203119 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.203130 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.214657 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.214722 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.214750 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.214852 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.214943 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.215112 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.305530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.305562 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.305571 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.305586 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.305596 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.363911 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.380197 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.393129 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z"
Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.394216 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.394282 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.394306 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.394336 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.394366 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.403430 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.406847 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.410086 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.410147 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.410172 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.410199 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.410221 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.417915 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.424632 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.428744 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.428781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.428793 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.428812 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.428825 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.430432 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.441047 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.442989 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb914
4e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\
\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.445154 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.445185 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.445195 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.445209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.445219 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.455398 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/0.log" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.458047 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.458168 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.460403 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.460864 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] 
Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 
18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d337
7b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.463925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.463951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.463961 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.463973 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.463984 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.476906 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: E1211 18:01:12.477024 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.478592 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.478623 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.478632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.478646 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.478655 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.480064 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.495268 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.509794 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.522115 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: 
I1211 18:01:12.534594 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.546490 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.557120 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.567499 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.577173 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703
f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.580713 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.580760 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.580774 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.580790 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.580800 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.590537 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.604988 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.617149 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.641177 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.659301 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.674931 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.683086 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.683123 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.683132 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc 
kubenswrapper[4877]: I1211 18:01:12.683148 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.683160 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.690063 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.705707 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.733867 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 
18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.747911 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.762530 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.780044 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.785848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 
18:01:12.785914 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.785927 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.785968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.785981 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.797076 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.810626 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:12Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:12 crc kubenswrapper[4877]: 
I1211 18:01:12.889245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.889286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.889297 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.889313 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.889324 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.992565 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.992611 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.992620 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.992634 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:12 crc kubenswrapper[4877]: I1211 18:01:12.992644 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:12Z","lastTransitionTime":"2025-12-11T18:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.095756 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.095792 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.095802 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.095816 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.095827 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.198587 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.198638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.198655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.198676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.198693 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.302318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.302410 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.302425 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.302444 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.302459 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.405666 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.405732 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.405751 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.405775 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.405792 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.459347 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf"] Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.459983 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.462368 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.462544 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.472942 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.473024 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.473076 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmv6\" (UniqueName: \"kubernetes.io/projected/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-kube-api-access-sgmv6\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.473147 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.480565 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c
77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.500401 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.508646 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.508698 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.508715 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.508739 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.508756 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.518741 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.549773 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b1487
5ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.570540 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.574086 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmv6\" (UniqueName: \"kubernetes.io/projected/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-kube-api-access-sgmv6\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.574164 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.574231 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.574287 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.575226 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.575673 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.586341 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: 
I1211 18:01:13.590842 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.591477 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmv6\" (UniqueName: \"kubernetes.io/projected/78f90c0d-ae7f-4ca3-acd8-2219b9316c52-kube-api-access-sgmv6\") pod \"ovnkube-control-plane-749d76644c-qtfsf\" (UID: \"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.605809 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.611919 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.611981 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.612001 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.612026 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.612045 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.622115 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.650123 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 
18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.664071 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.685990 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.700205 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.714854 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.714918 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.714929 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.714950 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.714965 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.716717 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.732562 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.745461 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: 
I1211 18:01:13.766936 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:13Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.782948 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" Dec 11 18:01:13 crc kubenswrapper[4877]: W1211 18:01:13.819007 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f90c0d_ae7f_4ca3_acd8_2219b9316c52.slice/crio-604c4ed6b0bcd020e17c8a6a5c98a550be5f87f3c4ebe5afbec61aadaa9260c7 WatchSource:0}: Error finding container 604c4ed6b0bcd020e17c8a6a5c98a550be5f87f3c4ebe5afbec61aadaa9260c7: Status 404 returned error can't find the container with id 604c4ed6b0bcd020e17c8a6a5c98a550be5f87f3c4ebe5afbec61aadaa9260c7 Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.819197 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.819234 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.819245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.819263 4877 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.819273 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.922669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.922707 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.922717 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.922731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:13 crc kubenswrapper[4877]: I1211 18:01:13.922740 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:13Z","lastTransitionTime":"2025-12-11T18:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.025144 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.025205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.025227 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.025246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.025260 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.127946 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.127985 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.127996 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.128011 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.128022 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.180475 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.180665 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 18:01:30.180649388 +0000 UTC m=+51.206893432 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.214760 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.214801 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.214763 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.214885 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.214922 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.215023 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.230290 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.230316 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.230326 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.230338 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.230347 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.281699 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.281758 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.281794 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.281821 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.281960 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.281980 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.281993 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282044 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:30.282027365 +0000 UTC m=+51.308271409 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282067 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282090 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282143 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282169 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282108 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282169 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2025-12-11 18:01:30.282142199 +0000 UTC m=+51.308386283 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282262 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:30.282238301 +0000 UTC m=+51.308482385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.282286 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:30.282275142 +0000 UTC m=+51.308519286 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.333737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.333790 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.333807 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.333830 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.333848 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.436251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.436312 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.436339 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.436368 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.436476 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.472666 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/1.log" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.473298 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/0.log" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.476591 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8" exitCode=1 Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.476661 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.476716 4877 scope.go:117] "RemoveContainer" containerID="0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.477896 4877 scope.go:117] "RemoveContainer" containerID="66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.478116 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" event={"ID":"78f90c0d-ae7f-4ca3-acd8-2219b9316c52","Type":"ContainerStarted","Data":"313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d"} Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.478133 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.478155 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" event={"ID":"78f90c0d-ae7f-4ca3-acd8-2219b9316c52","Type":"ContainerStarted","Data":"604c4ed6b0bcd020e17c8a6a5c98a550be5f87f3c4ebe5afbec61aadaa9260c7"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.494584 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.510018 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.523830 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.534854 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.538548 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.538576 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.538585 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.538598 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.538608 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.553168 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.583244 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac
3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.600532 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sn9xv"] Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.601045 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.601101 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.601135 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC 
(now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.614258 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.626788 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.641472 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.641510 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.641523 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.641541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.641554 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.655816 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.681541 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 
obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.684589 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fkk5q\" (UniqueName: \"kubernetes.io/projected/fa0b7b99-8d0a-48ad-9f98-da5947644472-kube-api-access-fkk5q\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.684653 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.702121 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.721431 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.738085 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.744222 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 
18:01:14.744242 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.744249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.744263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.744271 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.754244 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.770626 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: 
I1211 18:01:14.785333 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.785424 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkk5q\" (UniqueName: \"kubernetes.io/projected/fa0b7b99-8d0a-48ad-9f98-da5947644472-kube-api-access-fkk5q\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.785830 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: E1211 18:01:14.785887 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:15.285870099 +0000 UTC m=+36.312114143 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.788165 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.804923 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.809758 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkk5q\" (UniqueName: \"kubernetes.io/projected/fa0b7b99-8d0a-48ad-9f98-da5947644472-kube-api-access-fkk5q\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.821350 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb914
4e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\
\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.845994 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 
obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.847356 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.847430 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.847445 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.847463 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.847474 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.859942 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.881953 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac
3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.894765 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.905327 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.918563 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.932498 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: 
I1211 18:01:14.947522 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.949360 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.949433 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.949446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.949465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.949477 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:14Z","lastTransitionTime":"2025-12-11T18:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.963784 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc kubenswrapper[4877]: I1211 18:01:14.976410 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:14 crc 
kubenswrapper[4877]: I1211 18:01:14.988749 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.000468 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18
e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:14Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.013889 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.029355 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.052314 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.052361 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.052389 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.052406 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.052416 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.154924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.154966 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.154978 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.154994 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.155009 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.257805 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.257859 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.257875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.257897 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.257914 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.289828 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:15 crc kubenswrapper[4877]: E1211 18:01:15.290008 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:15 crc kubenswrapper[4877]: E1211 18:01:15.290128 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:16.290102564 +0000 UTC m=+37.316346638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.360076 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.360131 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.360153 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.360176 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.360193 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.463417 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.463908 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.463918 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.463937 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.463951 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.483849 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/1.log" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.496723 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" event={"ID":"78f90c0d-ae7f-4ca3-acd8-2219b9316c52","Type":"ContainerStarted","Data":"e3657c77767cb19c0fbb047553415d7b2c649bbe2cd589b588ff90d34de95a90"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.519067 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.534590 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.545448 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.561240 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.566436 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.566476 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.566489 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.566503 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.566514 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.582725 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.606252 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 
obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.618985 4877 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.642652 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.659735 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.670151 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.670208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.670225 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.670248 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.670271 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.673549 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.685315 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.703890 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.725451 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.744269 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.763432 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.775734 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.775802 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.775819 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.775847 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.775864 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.778025 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.791726 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:15Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:15 crc 
kubenswrapper[4877]: I1211 18:01:15.879104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.879154 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.879165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.879183 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.879197 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.981801 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.981848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.981859 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.981875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:15 crc kubenswrapper[4877]: I1211 18:01:15.981886 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:15Z","lastTransitionTime":"2025-12-11T18:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.084436 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.084515 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.084534 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.084555 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.084573 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.187819 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.187886 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.187902 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.187926 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.187943 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.214575 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.214588 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.214646 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.214664 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.214846 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.214990 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.215245 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.215811 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.291155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.291206 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.291218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.291234 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.291245 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.303947 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.304069 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:16 crc kubenswrapper[4877]: E1211 18:01:16.304112 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:18.304099063 +0000 UTC m=+39.330343107 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.394552 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.394618 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.394634 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.394661 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.394678 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.497199 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.497308 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.497334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.497485 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.497529 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.602415 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.602540 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.602566 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.602663 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.602749 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.706458 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.706536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.706559 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.706586 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.706606 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.809465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.809545 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.809557 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.809577 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.809597 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.911957 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.912044 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.912068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.912097 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:16 crc kubenswrapper[4877]: I1211 18:01:16.912119 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:16Z","lastTransitionTime":"2025-12-11T18:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.014473 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.014549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.014581 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.014610 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.014633 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.122115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.122178 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.122195 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.122221 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.122245 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.223942 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.224006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.224024 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.224046 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.224063 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.326980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.327041 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.327063 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.327090 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.327111 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.430069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.430108 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.430117 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.430130 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.430140 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.532700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.532745 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.532756 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.532775 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.532787 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.635440 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.635558 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.635578 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.635603 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.635620 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.738209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.738286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.738305 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.738330 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.738347 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.840491 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.840535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.840547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.840566 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.840578 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.943071 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.943205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.943229 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.943258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:17 crc kubenswrapper[4877]: I1211 18:01:17.943281 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:17Z","lastTransitionTime":"2025-12-11T18:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.046116 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.046173 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.046189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.046211 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.046229 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.149303 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.149354 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.149364 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.149392 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.149402 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.214561 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.214559 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.214696 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.214582 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.214751 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.214564 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.214896 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.215020 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.252166 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.252197 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.252207 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.252221 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.252232 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.325163 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.325438 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:18 crc kubenswrapper[4877]: E1211 18:01:18.325582 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:22.325556614 +0000 UTC m=+43.351800658 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.354832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.354897 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.354912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.354929 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.354939 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.458402 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.458459 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.458473 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.458494 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.458509 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.561839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.561877 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.561891 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.561908 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.561918 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.664849 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.664894 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.664902 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.664916 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.664926 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.767298 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.767342 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.767351 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.767365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.767391 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.869768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.869811 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.869820 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.869833 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.869844 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.971989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.972049 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.972066 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.972089 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:18 crc kubenswrapper[4877]: I1211 18:01:18.972109 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:18Z","lastTransitionTime":"2025-12-11T18:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.074571 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.074625 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.074635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.074656 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.074669 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.177357 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.177401 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.177411 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.177424 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.177434 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.239215 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.257199 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.273755 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.280180 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 
18:01:19.280244 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.280252 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.280270 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.280283 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.302763 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.319557 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: 
I1211 18:01:19.330776 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.342321 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc 
kubenswrapper[4877]: I1211 18:01:19.355872 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.372446 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.382089 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.382132 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.382143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.382161 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.382173 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.383065 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.399850 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808
168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.426138 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.441500 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.456860 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.472231 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.484419 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.484453 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.484462 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.484477 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.484487 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.491020 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.514893 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fef9cb6eb20a9f60fc3ef47c955b3674b9a7c2be3fdebf18c05f1e8c86960fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:10Z\\\",\\\"message\\\":\\\"o:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955246 6172 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.955545 6172 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 18:01:09.955603 6172 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 18:01:09.955638 6172 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 18:01:09.955674 6172 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 18:01:09.955687 6172 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 18:01:09.955938 6172 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956181 6172 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 18:01:09.956651 6172 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 18:01:09.956730 6172 factory.go:656] Stopping watch factory\\\\nI1211 18:01:09.956749 6172 ovnkube.go:599] Stopped ovnkube\\\\nI1211 18:01:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 
obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:19Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.586669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.586706 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.586715 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.586730 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.586740 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.689244 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.689293 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.689304 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.689319 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.689329 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.792203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.792248 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.792258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.792275 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.792287 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.894777 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.894810 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.894818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.894832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.894841 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.997360 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.997440 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.997460 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.997484 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:19 crc kubenswrapper[4877]: I1211 18:01:19.997507 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:19Z","lastTransitionTime":"2025-12-11T18:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.100488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.100543 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.100552 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.100572 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.100590 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.202449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.202488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.202497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.202512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.202522 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.215212 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:20 crc kubenswrapper[4877]: E1211 18:01:20.215303 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.215584 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:20 crc kubenswrapper[4877]: E1211 18:01:20.215745 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.215840 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:20 crc kubenswrapper[4877]: E1211 18:01:20.215901 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.215938 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:20 crc kubenswrapper[4877]: E1211 18:01:20.215983 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.305108 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.305147 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.305427 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.305448 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.305458 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.408315 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.408484 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.408513 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.408545 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.408569 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.512157 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.512223 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.512235 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.512252 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.512266 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.615043 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.615089 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.615096 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.615110 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.615120 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.717578 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.717622 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.717633 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.717647 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.717656 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.821064 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.821137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.821148 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.821170 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.821184 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.923566 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.923615 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.923628 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.923672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:20 crc kubenswrapper[4877]: I1211 18:01:20.923684 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:20Z","lastTransitionTime":"2025-12-11T18:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.025944 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.025997 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.026011 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.026028 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.026040 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.129325 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.129423 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.129446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.129477 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.129502 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.232260 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.232317 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.232340 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.232367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.232434 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.334770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.334830 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.334849 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.334872 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.334891 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.438154 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.438251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.438263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.438284 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.438296 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.540848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.540893 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.540903 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.540918 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.540931 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.643417 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.643487 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.643501 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.643527 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.643541 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.746165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.746213 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.746224 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.746241 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.746254 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.849951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.850006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.850023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.850049 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.850069 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.953220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.953257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.953271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.953294 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:21 crc kubenswrapper[4877]: I1211 18:01:21.953311 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:21Z","lastTransitionTime":"2025-12-11T18:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.056410 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.056488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.056546 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.056574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.056594 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.159440 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.159486 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.159498 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.159514 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.159527 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.214209 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.214221 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.214271 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.214405 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.214501 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.214589 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.214729 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.214838 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.261824 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.261880 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.261894 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.261913 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.261926 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.371044 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.371097 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.371125 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.371149 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.371167 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.401876 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.402149 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.402267 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:30.402240753 +0000 UTC m=+51.428484827 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.473872 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.473931 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.473947 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.473974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.473990 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.577074 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.577155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.577174 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.577199 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.577217 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.680189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.680246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.680260 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.680283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.680295 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.783675 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.783720 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.783732 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.783752 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.783763 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.805953 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.805999 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.806011 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.806028 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.806048 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.827189 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:22Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.832887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.832946 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.832964 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.832990 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.833007 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.852068 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:22Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.856241 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.856358 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.856404 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.856437 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.856456 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.876002 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:22Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.880832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.880888 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.880903 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.880923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.880936 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.899325 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:22Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.903649 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.903697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.903707 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.903725 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.903736 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.919762 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:22Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:22 crc kubenswrapper[4877]: E1211 18:01:22.919934 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.921700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.921745 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.921791 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.921816 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:22 crc kubenswrapper[4877]: I1211 18:01:22.921833 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:22Z","lastTransitionTime":"2025-12-11T18:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.024074 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.024141 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.024150 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.024165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.024176 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.126941 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.126997 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.127006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.127023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.127033 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.229230 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.229271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.229283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.229298 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.229311 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.332456 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.332501 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.332510 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.332525 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.332537 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.436103 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.436166 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.436181 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.436204 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.436222 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.538593 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.538653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.538665 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.538686 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.538702 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.641802 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.641945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.641966 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.641992 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.642011 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.744171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.744234 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.744251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.744276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.744293 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.848255 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.848336 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.848346 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.848361 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.848481 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.950695 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.950768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.950785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.950811 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:23 crc kubenswrapper[4877]: I1211 18:01:23.950828 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:23Z","lastTransitionTime":"2025-12-11T18:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.053191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.053261 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.053283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.053315 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.053353 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.157163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.157209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.157223 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.157238 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.157250 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.215153 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.215165 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:24 crc kubenswrapper[4877]: E1211 18:01:24.215330 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.215177 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.215469 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:24 crc kubenswrapper[4877]: E1211 18:01:24.215577 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:24 crc kubenswrapper[4877]: E1211 18:01:24.215660 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:24 crc kubenswrapper[4877]: E1211 18:01:24.215725 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.259726 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.259788 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.259806 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.259832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.259850 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.364203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.364295 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.364316 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.364342 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.364357 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.468064 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.468138 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.468156 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.468179 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.468257 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.571210 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.571274 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.571284 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.571304 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.571316 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.673486 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.673575 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.673591 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.673616 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.673639 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.775924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.775989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.776010 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.776040 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.776062 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.879178 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.879211 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.879220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.879233 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.879242 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.981965 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.982524 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.982676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.982819 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:24 crc kubenswrapper[4877]: I1211 18:01:24.983044 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:24Z","lastTransitionTime":"2025-12-11T18:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.085802 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.085874 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.085883 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.085898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.085910 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.188257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.188297 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.188307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.188320 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.188330 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.290616 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.290692 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.290725 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.290757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.290779 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.393644 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.393697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.393708 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.393724 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.393736 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.496442 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.496518 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.496541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.496573 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.496595 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.598883 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.598926 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.598936 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.598952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.598963 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.701600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.701650 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.701659 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.701672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.701682 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.806208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.806257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.806266 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.806281 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.806291 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.909032 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.909095 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.909116 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.909138 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:25 crc kubenswrapper[4877]: I1211 18:01:25.909155 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:25Z","lastTransitionTime":"2025-12-11T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.011881 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.011922 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.011935 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.011952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.011964 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.114948 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.114985 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.114994 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.115008 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.115018 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.215296 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.215361 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.215428 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:26 crc kubenswrapper[4877]: E1211 18:01:26.215550 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.215575 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:26 crc kubenswrapper[4877]: E1211 18:01:26.215710 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:26 crc kubenswrapper[4877]: E1211 18:01:26.216259 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:26 crc kubenswrapper[4877]: E1211 18:01:26.216074 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.217501 4877 scope.go:117] "RemoveContainer" containerID="66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.219236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.219428 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.219461 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.219541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.219570 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.241695 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.271717 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.292753 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.323293 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.323445 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.323546 4877 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.323635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.323663 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.329364 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f2
39729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.354057 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.372532 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.390179 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.404161 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.417558 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.425865 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.425914 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.425931 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.425954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.425971 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.435942 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.458525 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.475000 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: 
I1211 18:01:26.486591 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.501465 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc 
kubenswrapper[4877]: I1211 18:01:26.527000 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.535711 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.535764 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.535784 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.535810 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.535831 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.546612 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/1.log" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.579231 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.593995 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:26Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.637896 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.637926 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.637938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.637952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.637961 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.740733 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.740773 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.740782 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.740796 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.740806 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.843638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.843700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.843715 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.843738 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.843775 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.946072 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.946125 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.946139 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.946159 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:26 crc kubenswrapper[4877]: I1211 18:01:26.946173 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:26Z","lastTransitionTime":"2025-12-11T18:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.048174 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.048201 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.048209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.048220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.048228 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.149875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.149910 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.149921 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.149935 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.149946 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.252714 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.252757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.252769 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.252785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.252800 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.355100 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.355146 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.355159 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.355176 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.355228 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.458799 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.458842 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.458852 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.458870 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.458883 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.558759 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/1.log" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.565898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.566047 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.566143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.566173 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.566194 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.573682 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.573867 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.596212 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.615362 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.629646 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.645412 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.666645 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.668925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.668953 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.668962 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.668977 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.668988 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.698634 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.713663 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.735922 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.751744 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.767804 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.771822 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.771879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.771895 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc 
kubenswrapper[4877]: I1211 18:01:27.771917 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.771934 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.782775 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.797634 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.811741 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.828099 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.850006 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.864040 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.874698 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.874758 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.874767 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.874785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.874797 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.880217 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:27Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:27 crc 
kubenswrapper[4877]: I1211 18:01:27.925332 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.977497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.977577 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.977602 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.977632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:27 crc kubenswrapper[4877]: I1211 18:01:27.977675 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:27Z","lastTransitionTime":"2025-12-11T18:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.080965 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.081051 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.081068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.081093 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.081111 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.184027 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.184066 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.184074 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.184089 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.184099 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.215356 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.215517 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.215416 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:28 crc kubenswrapper[4877]: E1211 18:01:28.215609 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.215418 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:28 crc kubenswrapper[4877]: E1211 18:01:28.215700 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:28 crc kubenswrapper[4877]: E1211 18:01:28.215842 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:28 crc kubenswrapper[4877]: E1211 18:01:28.216060 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.287658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.287729 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.287751 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.287782 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.287805 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.391083 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.391146 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.391171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.391201 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.391226 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.494215 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.494286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.494309 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.494339 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.494361 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.582941 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/2.log" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.583564 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/1.log" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.586979 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" exitCode=1 Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.587025 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.587070 4877 scope.go:117] "RemoveContainer" containerID="66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.587796 4877 scope.go:117] "RemoveContainer" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" Dec 11 18:01:28 crc kubenswrapper[4877]: E1211 18:01:28.587974 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.598607 4877 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.599321 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.599340 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.599369 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.599414 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.612497 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d84
8e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.632566 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.652938 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.667989 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.684616 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.702006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.702031 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.702040 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.702054 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.702064 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.703498 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.718470 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.739913 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.757203 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.771855 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.784261 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.799791 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.804249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.804303 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.804313 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.804329 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.804338 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.816297 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.831543 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.848893 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.862834 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.874300 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:28Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.907414 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.907476 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.907497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.907526 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:28 crc kubenswrapper[4877]: I1211 18:01:28.907544 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:28Z","lastTransitionTime":"2025-12-11T18:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.009806 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.009853 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.009865 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.009882 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.009893 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.112184 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.112250 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.112266 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.112291 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.112311 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.215594 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.216342 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.216405 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.216426 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.216441 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.231441 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.248907 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc 
kubenswrapper[4877]: I1211 18:01:29.266171 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.283467 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.299832 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.314722 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.318858 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.318922 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.318943 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.318971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.318989 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.337310 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.371162 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66023470d923e757158118490c112d393d23f57a09dc58310c95e2203885e4b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"message\\\":\\\"wfnt after 0 failed attempt(s)\\\\nI1211 18:01:12.307780 6316 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-gwfnt\\\\nI1211 18:01:12.307784 6316 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307787 6316 ovn.go:134] Ensuring zone local for Pod 
openshift-multus/multus-additional-cni-plugins-665tk in node crc\\\\nI1211 18:01:12.307802 6316 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-dtgjg\\\\nI1211 18:01:12.307804 6316 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-665tk after 0 failed attempt(s)\\\\nF1211 18:01:12.307806 6316 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.397342 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.421216 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.421251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.421260 4877 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.421273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.421282 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.432890 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f2
39729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.452926 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.470537 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.485980 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.503793 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523347 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523824 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523868 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523909 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.523926 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.546910 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.566046 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.592421 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/2.log" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.596974 4877 scope.go:117] "RemoveContainer" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" Dec 11 18:01:29 crc kubenswrapper[4877]: E1211 18:01:29.597194 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.613602 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.628300 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.628334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc 
kubenswrapper[4877]: I1211 18:01:29.628343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.628362 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.628396 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.630474 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.651532 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.668453 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.682015 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: 
I1211 18:01:29.693488 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.706077 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc 
kubenswrapper[4877]: I1211 18:01:29.722601 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.731312 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.731421 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.731448 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.731476 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.731502 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.740018 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.751246 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.762521 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.793200 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.814134 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.832277 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.834584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.834650 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.834669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc 
kubenswrapper[4877]: I1211 18:01:29.834693 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.834711 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.848155 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.869126 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.898308 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:29Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.937831 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.937894 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.937910 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.937936 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:29 crc kubenswrapper[4877]: I1211 18:01:29.937953 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:29Z","lastTransitionTime":"2025-12-11T18:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.042652 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.042742 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.042765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.042795 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.042823 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.145145 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.145191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.145203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.145222 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.145236 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.185918 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.186178 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 18:02:02.186148974 +0000 UTC m=+83.212393048 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.214341 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.214368 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.214433 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.214406 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.214480 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.214607 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.214682 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.214751 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.247586 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.247638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.247656 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.247681 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.247697 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.287150 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.287216 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.287243 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.287271 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287294 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287319 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287331 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287353 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287402 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:02:02.287361027 +0000 UTC m=+83.313605131 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287410 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287422 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:02:02.287414739 +0000 UTC m=+83.313658863 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287476 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:02:02.28746316 +0000 UTC m=+83.313707214 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287544 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287582 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287604 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.287687 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:02:02.287664705 +0000 UTC m=+83.313908789 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.350809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.350868 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.350883 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.350904 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.350916 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.454544 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.454600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.454616 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.454642 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.454658 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.489255 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.489474 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: E1211 18:01:30.489556 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:01:46.489533276 +0000 UTC m=+67.515777360 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.557171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.557233 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.557244 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.557259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.557269 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.659606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.659674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.659696 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.659724 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.659746 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.762497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.762541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.762555 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.762572 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.762585 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.867489 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.867821 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.867905 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.868101 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.868137 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.970743 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.970789 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.970801 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.970822 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:30 crc kubenswrapper[4877]: I1211 18:01:30.970835 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:30Z","lastTransitionTime":"2025-12-11T18:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.073363 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.073462 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.073480 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.073503 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.073521 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.176162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.176228 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.176249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.176278 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.176300 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.279622 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.279739 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.279767 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.279794 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.279816 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.382881 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.382943 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.382952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.382972 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.382983 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.486999 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.487096 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.487110 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.487134 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.487151 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.590879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.590939 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.590953 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.590982 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.590999 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.695241 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.695329 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.695347 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.695415 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.695457 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.798625 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.798688 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.798706 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.798732 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.798750 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.900782 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.900823 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.900848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.900864 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:31 crc kubenswrapper[4877]: I1211 18:01:31.900873 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:31Z","lastTransitionTime":"2025-12-11T18:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.003786 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.003848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.003857 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.003925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.003941 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.106745 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.106809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.106826 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.106854 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.106878 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.209593 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.209669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.209690 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.209713 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.209730 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.214894 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.214950 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:32 crc kubenswrapper[4877]: E1211 18:01:32.214993 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.215027 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.215046 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:32 crc kubenswrapper[4877]: E1211 18:01:32.215153 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:32 crc kubenswrapper[4877]: E1211 18:01:32.215432 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:32 crc kubenswrapper[4877]: E1211 18:01:32.215610 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.312168 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.312220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.312230 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.312245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.312255 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.414432 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.414488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.414508 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.414530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.414547 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.518130 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.518194 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.518203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.518220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.518232 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.621088 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.621157 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.621174 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.621199 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.621216 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.723560 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.723597 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.723609 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.723623 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.723632 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.826623 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.826694 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.826718 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.826745 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.826767 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.929314 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.929438 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.929459 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.929481 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:32 crc kubenswrapper[4877]: I1211 18:01:32.929539 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:32Z","lastTransitionTime":"2025-12-11T18:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.031601 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.031669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.031687 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.031711 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.031730 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.040858 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.040917 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.040934 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.040960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.040984 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.060724 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.064683 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.064740 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.064759 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.064781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.064798 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.078795 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.083606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.083672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.083699 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.083731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.083755 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.097281 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.101444 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.101493 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.101514 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.101536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.101548 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.114209 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.117945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.117998 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.118017 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.118042 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.118060 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.132041 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: E1211 18:01:33.132263 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.134655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.134704 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.134722 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.134748 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.134765 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.236680 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.236746 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.236763 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.236785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.236804 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.337531 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.340709 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.340754 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.340765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.340781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.340798 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.351324 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.360804 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee4780
68a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.380836 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.397956 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.414909 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.429254 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.444626 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.444655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.444670 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.444697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.444709 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.458796 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.475857 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.493421 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.507799 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.522602 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.542478 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.552601 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.552662 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.552686 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.552717 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.552742 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.559193 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.571809 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.584304 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc 
kubenswrapper[4877]: I1211 18:01:33.602215 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.619612 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.632796 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:33Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.655115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.655147 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.655156 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.655171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.655182 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.758257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.758299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.758312 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.758329 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.758342 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.861232 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.861294 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.861307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.861330 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.861341 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.964327 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.964395 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.964406 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.964421 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:33 crc kubenswrapper[4877]: I1211 18:01:33.964432 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:33Z","lastTransitionTime":"2025-12-11T18:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.067073 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.067124 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.067132 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.067147 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.067156 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.169808 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.169868 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.169885 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.169908 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.169925 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.214599 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.214635 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.214732 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 18:01:34 crc kubenswrapper[4877]: E1211 18:01:34.214728 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.214786 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 18:01:34 crc kubenswrapper[4877]: E1211 18:01:34.214855 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 18:01:34 crc kubenswrapper[4877]: E1211 18:01:34.214923 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472"
Dec 11 18:01:34 crc kubenswrapper[4877]: E1211 18:01:34.214968 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.272556 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.272628 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.272651 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.272681 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.272706 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.375508 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.375560 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.375582 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.375611 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.375630 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.478271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.478327 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.478344 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.478367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.478420 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.581475 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.581537 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.581558 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.581585 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.581607 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.684515 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.684570 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.684587 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.684609 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.684625 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.787249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.787410 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.787431 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.787454 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.787471 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.889852 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.889898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.889908 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.889923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.889935 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.992265 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.992334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.992350 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.992406 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:34 crc kubenswrapper[4877]: I1211 18:01:34.992423 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:34Z","lastTransitionTime":"2025-12-11T18:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.094708 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.095115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.095272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.095460 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.095605 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.197974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.198038 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.198060 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.198087 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.198108 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.300828 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.300902 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.300924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.300954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.300975 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.403452 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.403512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.403528 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.403551 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.403569 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.507164 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.507244 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.507256 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.507294 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.507308 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.610978 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.611090 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.611109 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.611168 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.611190 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.714051 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.714183 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.714218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.714248 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.714270 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.817906 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.817960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.817972 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.817990 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.818004 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.920734 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.920795 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.920811 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.920833 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:35 crc kubenswrapper[4877]: I1211 18:01:35.920851 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:35Z","lastTransitionTime":"2025-12-11T18:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.023762 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.023817 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.023828 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.023847 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.023862 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.126367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.126423 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.126432 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.126447 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.126458 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.214630 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.214683 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.214699 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv"
Dec 11 18:01:36 crc kubenswrapper[4877]: E1211 18:01:36.214764 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.214638 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 18:01:36 crc kubenswrapper[4877]: E1211 18:01:36.214922 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472"
Dec 11 18:01:36 crc kubenswrapper[4877]: E1211 18:01:36.214976 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 18:01:36 crc kubenswrapper[4877]: E1211 18:01:36.215035 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.228133 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.228155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.228163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.228176 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.228185 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.331413 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.331468 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.331480 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.331493 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.331521 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.434280 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.434315 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.434324 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.434338 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.434348 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.537323 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.537368 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.537529 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.537550 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.537562 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.640081 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.640137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.640153 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.640176 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.640194 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.743628 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.743675 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.743688 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.743705 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.743717 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.846037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.846074 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.846082 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.846097 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.846108 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.947933 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.947970 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.947979 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.947992 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:36 crc kubenswrapper[4877]: I1211 18:01:36.948002 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:36Z","lastTransitionTime":"2025-12-11T18:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.050251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.050298 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.050309 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.050324 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.050337 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.152543 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.152587 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.152598 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.152614 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.152625 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.255111 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.255144 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.255151 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.255168 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.255177 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.357954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.358027 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.358050 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.358085 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.358106 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.460981 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.461051 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.461069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.461092 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.461112 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.563455 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.563508 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.563525 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.563547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.563563 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.666038 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.666106 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.666128 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.666155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.666176 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.768778 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.768825 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.768835 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.768852 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.768863 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.871355 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.871409 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.871437 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.871452 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.871463 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.973347 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.973459 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.973482 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.973509 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:37 crc kubenswrapper[4877]: I1211 18:01:37.973531 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:37Z","lastTransitionTime":"2025-12-11T18:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.075853 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.075923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.075934 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.075948 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.075959 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.178620 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.178656 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.178665 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.178680 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.178690 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.215295 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.215346 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.215409 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.215476 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:38 crc kubenswrapper[4877]: E1211 18:01:38.215467 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:38 crc kubenswrapper[4877]: E1211 18:01:38.215583 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:38 crc kubenswrapper[4877]: E1211 18:01:38.215727 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:38 crc kubenswrapper[4877]: E1211 18:01:38.216091 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.280831 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.280882 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.280890 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.280906 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.280916 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.383004 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.383033 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.383041 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.383057 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.383069 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.485048 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.485092 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.485103 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.485118 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.485130 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.591758 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.592262 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.592302 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.592332 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.592357 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.695546 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.695614 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.695632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.695654 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.695670 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.798217 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.798305 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.798327 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.798355 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.798423 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.900721 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.900763 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.900772 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.900789 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:38 crc kubenswrapper[4877]: I1211 18:01:38.900808 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:38Z","lastTransitionTime":"2025-12-11T18:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.003363 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.003446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.003465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.003488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.003505 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.107361 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.107439 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.107449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.107466 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.107479 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.209950 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.210699 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.210722 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.210761 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.210785 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.233915 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z 
is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.260859 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.277596 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.300566 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.312491 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.312517 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.312527 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.312542 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.312553 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.320614 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 
18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.337113 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.356880 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.372965 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.388261 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.406676 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.414254 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.414276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.414284 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.414299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.414308 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.418095 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.432897 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.448539 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: 
I1211 18:01:39.460168 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.473037 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc 
kubenswrapper[4877]: I1211 18:01:39.491955 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.509620 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.516449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.516537 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.516560 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.516589 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.516610 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.521957 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:39Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.619467 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.619539 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.619578 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.619606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.619626 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.722636 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.722696 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.722716 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.722740 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.722758 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.824994 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.825044 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.825060 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.825083 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.825100 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.927209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.927259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.927276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.927299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:39 crc kubenswrapper[4877]: I1211 18:01:39.927317 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:39Z","lastTransitionTime":"2025-12-11T18:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.030548 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.030588 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.030597 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.030611 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.030622 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.132768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.132809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.132819 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.132837 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.132846 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.215230 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.215303 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.215253 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:40 crc kubenswrapper[4877]: E1211 18:01:40.215477 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:40 crc kubenswrapper[4877]: E1211 18:01:40.215636 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:40 crc kubenswrapper[4877]: E1211 18:01:40.215815 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.216162 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:40 crc kubenswrapper[4877]: E1211 18:01:40.217583 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.235892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.235970 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.236003 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.236189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.236219 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.338540 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.338590 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.338600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.338617 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.338629 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.441491 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.441545 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.441560 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.441580 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.441594 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.544461 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.544499 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.544507 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.544520 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.544529 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.646462 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.646518 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.646528 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.646544 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.646569 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.749218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.749285 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.749303 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.749396 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.749437 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.852742 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.852794 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.852809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.852828 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.852841 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.955914 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.955960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.955976 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.956001 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:40 crc kubenswrapper[4877]: I1211 18:01:40.956017 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:40Z","lastTransitionTime":"2025-12-11T18:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.058689 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.058757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.058770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.058791 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.058804 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.161979 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.162028 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.162037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.162053 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.162065 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.264919 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.264977 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.264995 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.265018 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.265036 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.368593 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.368637 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.368652 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.368673 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.368689 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.472264 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.472313 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.472328 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.472349 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.472367 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.574995 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.575028 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.575038 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.575052 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.575062 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.677599 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.677652 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.677668 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.677690 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.677707 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.780023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.780068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.780085 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.780107 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.780123 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.883809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.883990 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.884304 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.884683 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.884750 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.987068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.987101 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.987109 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.987123 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:41 crc kubenswrapper[4877]: I1211 18:01:41.987132 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:41Z","lastTransitionTime":"2025-12-11T18:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.090567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.090607 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.090618 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.090632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.090641 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.193879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.193949 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.193965 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.193989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.194006 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.215024 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.215101 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:42 crc kubenswrapper[4877]: E1211 18:01:42.215148 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.215040 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:42 crc kubenswrapper[4877]: E1211 18:01:42.215263 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.215292 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:42 crc kubenswrapper[4877]: E1211 18:01:42.215484 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:42 crc kubenswrapper[4877]: E1211 18:01:42.215819 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.297236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.297271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.297283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.297298 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.297308 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.400048 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.400094 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.400109 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.400440 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.400471 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.503471 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.503518 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.503538 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.503561 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.503577 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.606997 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.607072 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.607099 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.607143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.607165 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.709042 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.709076 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.709086 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.709104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.709117 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.811192 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.811231 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.811241 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.811258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.811269 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.913563 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.913615 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.913631 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.913653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:42 crc kubenswrapper[4877]: I1211 18:01:42.913670 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:42Z","lastTransitionTime":"2025-12-11T18:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.016121 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.016194 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.016217 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.016245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.016267 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.118832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.118873 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.118882 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.118896 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.118906 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.188353 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.188412 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.188424 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.188438 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.188450 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.207393 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:43Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.211957 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.211991 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.212000 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.212015 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.212027 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.226165 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:43Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.229709 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.229756 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.229768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.229785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.229795 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.240514 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:43Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.243587 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.243629 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.243639 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.243653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.243665 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.255260 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:43Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.258032 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.258059 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.258069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.258084 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.258096 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.268119 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:43Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:43 crc kubenswrapper[4877]: E1211 18:01:43.268261 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.269319 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.269343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.269353 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.269365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.269396 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.370963 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.371000 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.371009 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.371025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.371035 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.473454 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.473494 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.473504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.473520 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.473533 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.576016 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.576078 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.576090 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.576105 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.576119 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.678715 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.678770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.678781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.678794 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.678803 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.781471 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.781533 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.781549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.781574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.781590 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.884323 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.884404 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.884418 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.884438 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.884449 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.986961 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.987001 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.987010 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.987025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:43 crc kubenswrapper[4877]: I1211 18:01:43.987037 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:43Z","lastTransitionTime":"2025-12-11T18:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.089781 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.089820 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.089830 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.089848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.089861 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.193019 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.193083 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.193098 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.193118 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.193131 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.214683 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.214719 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.214739 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.214732 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:44 crc kubenswrapper[4877]: E1211 18:01:44.214820 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:44 crc kubenswrapper[4877]: E1211 18:01:44.215201 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:44 crc kubenswrapper[4877]: E1211 18:01:44.215278 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:44 crc kubenswrapper[4877]: E1211 18:01:44.215369 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.215468 4877 scope.go:117] "RemoveContainer" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" Dec 11 18:01:44 crc kubenswrapper[4877]: E1211 18:01:44.215616 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.295679 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.295731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.295741 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.295758 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.295768 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.397567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.397612 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.397622 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.397641 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.397653 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.501279 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.501312 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.501321 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.501332 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.501343 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.603498 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.603532 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.603541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.603554 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.603563 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.706515 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.706549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.706556 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.706569 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.706580 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.809425 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.809461 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.809471 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.809484 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.809494 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.912463 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.912502 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.912514 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.912533 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:44 crc kubenswrapper[4877]: I1211 18:01:44.912545 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:44Z","lastTransitionTime":"2025-12-11T18:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.014946 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.014980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.014989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.015004 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.015016 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.117657 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.117686 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.117694 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.117710 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.117719 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.219907 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.219942 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.219951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.219964 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.219972 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.322203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.322237 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.322246 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.322260 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.322270 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.424606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.424644 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.424655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.424670 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.424682 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.526985 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.527021 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.527031 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.527046 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.527056 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.635334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.635367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.635381 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.635409 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.635420 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.738045 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.738088 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.738098 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.738112 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.738122 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.840951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.841009 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.841021 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.841038 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.841051 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.944127 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.944169 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.944177 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.944191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:45 crc kubenswrapper[4877]: I1211 18:01:45.944202 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:45Z","lastTransitionTime":"2025-12-11T18:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.046784 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.046898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.046918 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.046951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.046971 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.149825 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.149968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.149980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.150002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.150013 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.224491 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.224492 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.224498 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.224527 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.225139 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.225374 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.225565 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.225781 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.239618 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.261910 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.261968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.261979 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.262003 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.262016 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.364584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.364644 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.364658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.364686 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.364701 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.468535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.468612 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.468623 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.468643 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.468655 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.498324 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.498856 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:46 crc kubenswrapper[4877]: E1211 18:01:46.498999 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:02:18.498969157 +0000 UTC m=+99.525213211 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.572120 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.572187 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.572201 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.572225 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.572240 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.674980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.675074 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.675118 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.675143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.675158 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.778514 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.778567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.778579 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.778603 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.778617 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.882275 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.882326 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.882340 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.882358 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.882374 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.984608 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.984658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.984669 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.984689 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:46 crc kubenswrapper[4877]: I1211 18:01:46.984702 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:46Z","lastTransitionTime":"2025-12-11T18:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.088477 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.088529 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.088540 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.088562 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.088575 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.191404 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.191473 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.191487 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.191512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.191526 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.295640 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.295707 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.295724 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.295752 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.295774 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.401062 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.401132 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.401155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.401184 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.401203 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.504628 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.504697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.504716 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.504744 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.504768 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.608208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.608264 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.608276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.608296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.608334 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.655524 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/0.log" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.655582 4877 generic.go:334] "Generic (PLEG): container finished" podID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c" containerID="e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064" exitCode=1 Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.655614 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerDied","Data":"e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.656008 4877 scope.go:117] "RemoveContainer" containerID="e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.676886 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.692560 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712634 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712684 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712711 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.712964 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.733723 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.747381 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.769305 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.783994 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.799537 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816357 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816418 4877 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816311 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816430 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816658 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.816683 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.829696 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.842457 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.855684 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.876181 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.894744 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.919560 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.919607 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.919619 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.919635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.919645 4877 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:47Z","lastTransitionTime":"2025-12-11T18:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.922700 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.948763 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc 
kubenswrapper[4877]: I1211 18:01:47.961891 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.976502 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:47 crc kubenswrapper[4877]: I1211 18:01:47.987681 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:47Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.022839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.022879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.022892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.022912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.022925 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.125296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.125333 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.125345 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.125365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.125413 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.215130 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.215130 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:48 crc kubenswrapper[4877]: E1211 18:01:48.215281 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.215154 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:48 crc kubenswrapper[4877]: E1211 18:01:48.215452 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.215212 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:48 crc kubenswrapper[4877]: E1211 18:01:48.215543 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:48 crc kubenswrapper[4877]: E1211 18:01:48.215586 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.227949 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.227980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.227989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.228001 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.228011 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.330484 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.330512 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.330524 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.330537 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.330548 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.432494 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.432548 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.432562 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.432580 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.432592 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.535207 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.535250 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.535260 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.535274 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.535283 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.637840 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.637891 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.637907 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.637924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.637936 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.660783 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/0.log" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.660840 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerStarted","Data":"9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.681162 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.694431 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.710673 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.733137 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.740830 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.740881 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.740892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.740913 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.740924 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.755219 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.785703 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.798527 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.821841 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.834326 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.844721 4877 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.844924 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.845020 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.845124 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.845223 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.847270 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.867544 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.881105 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: 
I1211 18:01:48.895323 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.907506 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.918942 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc 
kubenswrapper[4877]: I1211 18:01:48.930215 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.943549 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.947697 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.947737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.947751 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.947768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.947781 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:48Z","lastTransitionTime":"2025-12-11T18:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.953768 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:48 crc kubenswrapper[4877]: I1211 18:01:48.967443 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:48Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.050717 4877 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.051037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.051126 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.051206 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.051281 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.153632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.153959 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.154051 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.154143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.154223 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.234743 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.246891 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.257551 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.257600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.257755 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.257797 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.257811 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.259357 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.282172 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", 
Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.297228 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.326291 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.346378 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362139 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362894 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362927 4877 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362936 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362959 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.362971 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.378475 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.395194 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: 
I1211 18:01:49.410164 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.424559 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.457315 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.465822 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 
18:01:49.465882 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.465897 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.465920 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.465938 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.474880 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.489234 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.503871 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc 
kubenswrapper[4877]: I1211 18:01:49.517804 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.533947 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.548795 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:49Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.569911 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.569961 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.569972 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.569991 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.570001 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.671938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.672009 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.672020 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.672040 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.672054 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.774747 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.774798 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.774810 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.774832 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.774846 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.878357 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.878461 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.878472 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.878490 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.878503 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.981655 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.981715 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.981727 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.981750 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:49 crc kubenswrapper[4877]: I1211 18:01:49.981764 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:49Z","lastTransitionTime":"2025-12-11T18:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.085060 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.085106 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.085117 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.085137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.085151 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.188007 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.188085 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.188106 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.188136 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.188156 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.214679 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.214738 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.214706 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.214695 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:50 crc kubenswrapper[4877]: E1211 18:01:50.214884 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:50 crc kubenswrapper[4877]: E1211 18:01:50.215074 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:50 crc kubenswrapper[4877]: E1211 18:01:50.215135 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:50 crc kubenswrapper[4877]: E1211 18:01:50.215244 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.291482 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.291534 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.291547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.291568 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.291582 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.395889 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.395954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.395971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.395996 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.396016 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.500161 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.500229 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.500245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.500269 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.500282 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.603455 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.603547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.603567 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.603600 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.603619 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.706191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.706229 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.706239 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.706254 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.706267 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.808876 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.808931 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.808945 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.808964 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.808977 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.916912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.916960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.916974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.916997 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:50 crc kubenswrapper[4877]: I1211 18:01:50.917007 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:50Z","lastTransitionTime":"2025-12-11T18:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.019499 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.019536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.019546 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.019565 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.019574 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.122351 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.122402 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.122413 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.122434 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.122452 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.224400 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.224449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.224461 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.224478 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.224489 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.328300 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.328346 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.328356 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.328392 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.328403 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.432159 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.432217 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.432231 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.432251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.432264 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.534409 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.534449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.534459 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.534476 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.534487 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.637115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.637168 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.637181 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.637203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.637217 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.739703 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.739778 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.739791 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.739815 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.739831 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.842580 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.843191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.843208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.843231 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.843246 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.946122 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.946182 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.946197 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.946218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:51 crc kubenswrapper[4877]: I1211 18:01:51.946232 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:51Z","lastTransitionTime":"2025-12-11T18:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.049562 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.050115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.050316 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.050572 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.050798 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.153653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.153693 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.153701 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.153718 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.153728 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.214751 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.214811 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.214892 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.215333 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:52 crc kubenswrapper[4877]: E1211 18:01:52.215532 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:52 crc kubenswrapper[4877]: E1211 18:01:52.215715 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:52 crc kubenswrapper[4877]: E1211 18:01:52.215895 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:52 crc kubenswrapper[4877]: E1211 18:01:52.216033 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.262279 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.262337 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.262350 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.262396 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.262411 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.365216 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.365271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.365283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.365301 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.365315 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.468884 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.468932 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.468944 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.468964 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.468978 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.571996 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.572056 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.572068 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.572091 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.572107 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.674016 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.674083 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.674099 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.674125 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.674143 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.777705 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.778490 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.778522 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.778543 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.778555 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.881024 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.881076 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.881086 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.881104 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.881118 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.985096 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.985145 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.985154 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.985174 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:52 crc kubenswrapper[4877]: I1211 18:01:52.985189 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:52Z","lastTransitionTime":"2025-12-11T18:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.088056 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.088110 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.088126 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.088145 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.088157 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.190566 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.190624 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.190635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.190653 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.190665 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.292971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.293026 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.293037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.293056 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.293065 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.395343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.395422 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.395433 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.395457 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.395473 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.498841 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.498889 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.498907 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.498930 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.498946 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.580167 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.580253 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.580272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.580308 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.580332 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.608768 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:53Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.613267 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.613312 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.613329 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.613354 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.613375 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.633544 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:53Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.639203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.639262 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.639289 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.639324 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.639347 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.659014 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:53Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.663584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.663673 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.663700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.663752 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.663782 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.686561 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:53Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.691849 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.691901 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.691916 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.691941 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.691965 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.712156 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:53Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:53 crc kubenswrapper[4877]: E1211 18:01:53.712341 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.714551 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.714591 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.714604 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.714621 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.714633 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.819960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.820012 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.820027 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.820047 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.820056 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.923875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.923944 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.923963 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.923989 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:53 crc kubenswrapper[4877]: I1211 18:01:53.924009 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:53Z","lastTransitionTime":"2025-12-11T18:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.027785 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.027846 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.027860 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.027883 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.027898 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.131193 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.131245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.131255 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.131273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.131282 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.214414 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.214517 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:54 crc kubenswrapper[4877]: E1211 18:01:54.214601 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.214517 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.214517 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:54 crc kubenswrapper[4877]: E1211 18:01:54.214773 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:54 crc kubenswrapper[4877]: E1211 18:01:54.214895 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:54 crc kubenswrapper[4877]: E1211 18:01:54.215036 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.233909 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.233975 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.234002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.234034 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.234054 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.337681 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.337718 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.337727 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.337744 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.337753 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.440779 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.440836 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.440849 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.440871 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.440883 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.544646 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.544702 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.544711 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.544736 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.544747 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.647713 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.647770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.647786 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.647809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.647824 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.751195 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.751257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.751274 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.751299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.751316 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.854940 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.855016 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.855036 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.855100 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.855122 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.958959 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.959053 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.959069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.959096 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:54 crc kubenswrapper[4877]: I1211 18:01:54.959113 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:54Z","lastTransitionTime":"2025-12-11T18:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.062163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.062256 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.062269 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.062288 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.062299 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.165203 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.165258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.165271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.165293 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.165309 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.215538 4877 scope.go:117] "RemoveContainer" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.267870 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.267930 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.267940 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.267965 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.267978 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.372318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.372409 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.372498 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.372530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.372550 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.475177 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.475258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.475291 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.475323 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.475345 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.578734 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.578791 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.578806 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.578829 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.578842 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.682224 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.682290 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.682308 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.682334 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.682348 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.785538 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.785609 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.785636 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.785674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.785714 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.888676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.888740 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.888760 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.888790 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.888810 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.995162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.995224 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.995240 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.995288 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:55 crc kubenswrapper[4877]: I1211 18:01:55.995512 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:55Z","lastTransitionTime":"2025-12-11T18:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.099117 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.099155 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.099166 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.099182 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.099192 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.201324 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.201390 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.201403 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.201419 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.201430 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.214307 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.214337 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:56 crc kubenswrapper[4877]: E1211 18:01:56.214487 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.214519 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.214551 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:56 crc kubenswrapper[4877]: E1211 18:01:56.214611 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:56 crc kubenswrapper[4877]: E1211 18:01:56.214723 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:56 crc kubenswrapper[4877]: E1211 18:01:56.214782 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.303485 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.303517 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.303526 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.303542 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.303553 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.405929 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.405969 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.405983 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.406002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.406014 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.509232 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.509289 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.509303 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.509323 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.509335 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.611542 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.611604 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.611621 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.611646 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.611661 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.692442 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/2.log" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.696118 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.696697 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.712065 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.713642 4877 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.713679 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.713688 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.713708 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.713721 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.723870 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.735146 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.749709 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.766056 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.783746 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc 
ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.798562 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.816535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.816594 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.816611 4877 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.816635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.816651 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.819617 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f2
39729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.835487 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.848467 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.868629 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.885231 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c
0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.898468 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.913765 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.918641 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.918691 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.918705 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.918725 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.918739 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:56Z","lastTransitionTime":"2025-12-11T18:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.928482 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.947725 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.961300 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.972739 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:56 crc kubenswrapper[4877]: I1211 18:01:56.985648 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc 
kubenswrapper[4877]: I1211 18:01:57.020928 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.020973 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.020981 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.020999 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.021010 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.124267 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.124318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.124327 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.124346 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.124357 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.227082 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.227132 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.227143 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.227163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.227175 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.330145 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.330188 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.330197 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.330217 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.330228 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.434150 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.434228 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.434245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.434273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.434296 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.537123 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.537230 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.537253 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.537284 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.537304 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.640611 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.641109 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.641521 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.641757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.642095 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.701360 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/3.log" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.702526 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/2.log" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.705588 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" exitCode=1 Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.705720 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.705831 4877 scope.go:117] "RemoveContainer" containerID="42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.706503 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:01:57 crc kubenswrapper[4877]: E1211 18:01:57.706685 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.730085 4877 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.746412 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.746875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.747006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.747171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.747324 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.750009 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.767503 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.787819 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.802882 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: 
I1211 18:01:57.814096 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c24
97b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.825918 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.836475 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc 
kubenswrapper[4877]: I1211 18:01:57.849293 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.851750 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.851789 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.851798 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.851817 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.851828 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.861559 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.874393 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.897364 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.914236 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.928124 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.944017 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.954081 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.954128 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.954142 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.954162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.954173 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:57Z","lastTransitionTime":"2025-12-11T18:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.958284 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.971032 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-mu
ltus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:57 crc kubenswrapper[4877]: I1211 18:01:57.993696 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b90aa20643ab4ce26308c78c6d7cac871c7e07837f4a2ce582557217ceb816\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:27Z\\\",\\\"message\\\":\\\"services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}}\\\\nI1211 18:01:27.238624 6489 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.16:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3f1b9878-e751-4e46-a226-ce007d2c4aa7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:27.238650 6489 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc ann\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:56Z\\\",\\\"message\\\":\\\":false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:56.623724 6847 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI1211 18:01:56.623726 6847 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 8.72µs\\\\nI1211 18:01:56.623738 6847 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-vers\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:57Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 
crc kubenswrapper[4877]: I1211 18:01:58.007500 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.057307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.057362 4877 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.057437 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.057488 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.057503 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.160511 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.160556 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.160565 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.160582 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.160595 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.214449 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.214449 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.214536 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.214636 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:01:58 crc kubenswrapper[4877]: E1211 18:01:58.214767 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:01:58 crc kubenswrapper[4877]: E1211 18:01:58.215060 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:01:58 crc kubenswrapper[4877]: E1211 18:01:58.215191 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:01:58 crc kubenswrapper[4877]: E1211 18:01:58.215278 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.263604 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.263912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.264055 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.264207 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.264324 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.369376 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.369805 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.369992 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.370137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.370262 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.473137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.473236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.473257 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.473284 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.473308 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.576017 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.576672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.576792 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.576912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.577028 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.680140 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.680189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.680204 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.680224 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.680238 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.711840 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/3.log" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.715119 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:01:58 crc kubenswrapper[4877]: E1211 18:01:58.715262 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.732801 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.753723 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.768421 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.782766 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 
18:01:58.783152 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.783439 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.784497 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.784533 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.790132 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.805564 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: 
I1211 18:01:58.820569 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c24
97b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.834582 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.847551 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc 
kubenswrapper[4877]: I1211 18:01:58.862246 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.876928 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.887444 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.887507 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.887522 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.887547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.887562 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.891824 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.905095 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808
168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.926791 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.943117 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\"
,\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.956644 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.972368 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.990729 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:58Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.991191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.991263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.991276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.991296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:58 crc kubenswrapper[4877]: I1211 18:01:58.991308 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:58Z","lastTransitionTime":"2025-12-11T18:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.007274 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.035994 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:56Z\\\",\\\"message\\\":\\\":false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:56.623724 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI1211 18:01:56.623726 6847 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 8.72µs\\\\nI1211 18:01:56.623738 6847 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-vers\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.093698 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.094023 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.094085 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.094150 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.094206 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.196719 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.196765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.196774 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.196794 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.196805 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.238272 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b03ea4f-71aa-4fdc-b682-d3798c04ec8f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a492aaa725bcafae3125b038659ffac94112469ae935ece06f174b3cf762ceab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f054a8f8d84
8e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f36a4127220a2a7ff7dab9b8befa8c77e0200c1a585980ee78d0c2393706044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ac9b47ff1c60f32c361c77594804765e5f7354cfd8b563319b895fd4a9b2642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.255353 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dcb980386bd7092446180c96068eb675c03f0f43221689fd327db4ebe9df600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.266852 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjskq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcbb2f70-a54d-405a-b5f5-5857dd18b526\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07ed943044d1c52eba4170a5b50862bd53f54565ce0a135d279b9e6ea716f61e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lglq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjskq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.279888 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwfnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61afe7d0-ec5b-41aa-a8fb-6628b863a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:46Z\\\",\\\"message\\\":\\\"2025-12-11T18:01:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74\\\\n2025-12-11T18:01:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c8272a-1bfc-4bbd-a001-df68d0374f74 to /host/opt/cni/bin/\\\\n2025-12-11T18:01:01Z [verbose] multus-daemon started\\\\n2025-12-11T18:01:01Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T18:01:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwfnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.299904 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.299975 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.299990 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.300018 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.300030 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.301727 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea4114b7-a44c-4220-a321-9f18bbb90151\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T18:01:56Z\\\",\\\"message\\\":\\\":false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1211 18:01:56.623724 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:56Z is after 2025-08-24T17:21:41Z]\\\\nI1211 18:01:56.623726 6847 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 8.72µs\\\\nI1211 18:01:56.623738 6847 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-vers\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7d1df5f78a9707e1
b943dad9baf2448372a8442a663489faad1e16d3377b90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dvnsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvb5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.317308 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78f90c0d-ae7f-4ca3-acd8-2219b9316c52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dbf808168a78bce08270222c58ebfb33ac958a38bc5fce8247fd9dc94428d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3657c77767cb19c0fbb047553415d7b2c649
bbe2cd589b588ff90d34de95a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgmv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qtfsf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.342290 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24b581d0-94a4-4993-9048-34b2a842542f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4288f53410afb2a2617d31471ee1c080c2168c16cbfb85d8cc91dc8c10465913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db913d67d7c7c886b8eaa999cabb12f14fc6da906262c7455ae1c94c9747fe5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f02936711e06d30590ba4a7376ab2914eaf2898f8b697a302824614f475678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf8f49b4cc18781641e0178ac3212742a17073f3f43c937a8dc73b71d1df036a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bdcfb2400211537dcaf6b1b570b62076e17527e8498075626de287a28c12b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afaa7bdb178515bfb0eabbd121625ba2f95183d05bfd259cdab6f1688c416dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98fce09bad0273767f6cc5b46757eae72d87c8e4b5cf5feb93ccecbd9fbf991b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b14875ace4ee478068a4ae1dc719c0dd3bd50aeec2f239729aab5a1b3b4b249f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.356798 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40acfff0-36b4-4de3-a570-498c52cabfa9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\" 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 18:00:58.504619 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 18:00:58.512279 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 18:00:58.512301 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 18:00:58.512311 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 18:00:58.512314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 18:00:58.512318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 18:00:58.512322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 18:00:58.512535 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1211 
18:00:58.517424 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1211 18:00:58.517467 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-678379854/tls.crt::/tmp/serving-cert-678379854/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765476042\\\\\\\\\\\\\\\" (2025-12-11 18:00:42 +0000 UTC to 2026-01-10 18:00:43 +0000 UTC (now=2025-12-11 18:00:58.517428645 +0000 UTC))\\\\\\\"\\\\nI1211 18:00:58.517493 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 18:00:58.517544 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF1211 18:00:58.517601 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.368916 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27330020-559b-40cb-902a-044e06dee073\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f148f68607cf145de77583ba5960130ecdf97055beee224305d3de94958c3353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dec39053fa9ed82953821ecdff4313b3856ec32e4772398c37d7ebcf1833d55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4526853d8e97d7baaf38c3301e8068e57a196e912cefc5db6b261bf79087d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e4383bd992c11472d7af6417a65f7d45013122967ab863c7f721d6c44928bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.385307 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402016 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402101 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402124 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402152 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402094 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0abbbd5228a278958f36844daff7dcc250423c51e97cfa412bd58470a9bd624a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.402175 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.421174 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.435566 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f7502baa9d29f930cb305dca6243ef6d82d27a11870a646420092eb2fe6a425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5185d5448123a06c56a168cf156c1d9daa43bca1ac57b56c1f3906b61c202fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.450180 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.465038 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-665tk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0c29e17-9aad-46b1-bbff-eb00cc938537\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fabaf05a017b378ec824ef91527d9a98fa33cde04b1afb802b15c7b5e1326f26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510cfac7386ef86075ee50730a8002f744d3b521ff49ee1af7363d3d79f5699f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e4d0c7a6cb4e2496efedba5093309b312d85d5a8ee35a53f33a22b5522a12be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://791efa4d483aba374be1b457415ee5e96de50eb50e99f6d68807419cdd12d3f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9dd
1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9dd1a181a26d808bc8344b2f41382caa5d17b0834adafadfb927adeac0e263\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed1d8db42945b32334bd34f45b54da9647d7ad1a31dbec61be29cc14e904bcd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://942b7a08c3cd43b3f3775ed2c507a28b679ced342e08d6e7ea369a11d7b66df2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88rfm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-665tk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.478676 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4241e4fd6eab271e598df759ae1843851be34c42b49f82a6d4a79d41883cb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chx59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjnxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: 
I1211 18:01:59.492546 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b27506f1-6fe1-4b8f-8ec3-12b4ce0bb4a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7526cf2fa868a4216d213721f626e28ba5a54313d3b602d7f51f83821336bfff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a776c24
97b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a776c2497b0e8a220cb5d247a88e1940e5484fc003e3b87d97b382c699a27ad2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T18:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T18:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:00:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.503747 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtgjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c74b9aa5-bb2b-4d63-9ce6-ea21336f0741\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://163ae8057488cd9c24f618629cfb32870fcd3d7c345ddf2c59b3ecc9a856b81d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T18:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc8w4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtgjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.504809 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.504845 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.504857 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.504879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.504892 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.516419 4877 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0b7b99-8d0a-48ad-9f98-da5947644472\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T18:01:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkk5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T18:01:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sn9xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:01:59Z is after 2025-08-24T17:21:41Z" Dec 11 18:01:59 crc 
kubenswrapper[4877]: I1211 18:01:59.608368 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.608437 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.608446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.608468 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.608480 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.711242 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.711289 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.711299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.711319 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.711329 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.814249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.814302 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.814313 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.814332 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.814344 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.917295 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.917356 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.917365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.917408 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:01:59 crc kubenswrapper[4877]: I1211 18:01:59.917420 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:01:59Z","lastTransitionTime":"2025-12-11T18:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.020291 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.020335 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.020343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.020363 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.020392 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.125259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.125330 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.125340 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.125419 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.125435 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.215317 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.215449 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.215543 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.215552 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:00 crc kubenswrapper[4877]: E1211 18:02:00.215485 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:00 crc kubenswrapper[4877]: E1211 18:02:00.215679 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:00 crc kubenswrapper[4877]: E1211 18:02:00.215896 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:00 crc kubenswrapper[4877]: E1211 18:02:00.216097 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.228199 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.228236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.228253 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.228272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.228288 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.331208 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.331673 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.331793 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.331912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.331989 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.434878 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.434932 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.434943 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.434968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.434977 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.537913 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.537966 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.538000 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.538024 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.538044 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.641191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.641258 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.641268 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.641306 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.641320 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.744465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.744540 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.744574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.744598 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.744617 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.847162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.847205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.847215 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.847233 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.847246 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.950274 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.950318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.950330 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.950350 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:00 crc kubenswrapper[4877]: I1211 18:02:00.950367 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:00Z","lastTransitionTime":"2025-12-11T18:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.053951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.053993 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.054002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.054018 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.054029 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.157166 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.157207 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.157216 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.157235 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.157248 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.260478 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.260530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.260541 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.260558 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.260571 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.363818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.363884 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.363895 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.363917 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.363930 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.466971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.467307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.467365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.467473 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.467562 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.571584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.571649 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.571668 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.571700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.571719 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.674348 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.674445 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.674456 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.674474 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.674486 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.776910 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.776971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.776982 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.777004 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.777018 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.882085 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.882165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.882183 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.882213 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.882233 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.984651 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.984716 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.984737 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.984761 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:01 crc kubenswrapper[4877]: I1211 18:02:01.984780 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:01Z","lastTransitionTime":"2025-12-11T18:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.087204 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.087252 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.087263 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.087296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.087324 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.190161 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.190209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.190272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.190299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.190315 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.215299 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.215324 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.215345 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.215415 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.215475 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.215647 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.215775 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.215864 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.280028 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.280335 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.280294463 +0000 UTC m=+147.306538517 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.293294 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.293352 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.293366 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.293408 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.293426 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.381667 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.381774 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.381821 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.381902 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382063 4877 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382071 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382214 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382301 4877 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382076 4877 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382174 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.382139247 +0000 UTC m=+147.408383341 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382543 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.382516526 +0000 UTC m=+147.408760610 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.382582 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.382559857 +0000 UTC m=+147.408803951 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.383094 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.383153 4877 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.383169 4877 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:02:02 crc kubenswrapper[4877]: E1211 18:02:02.383282 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.383248815 +0000 UTC m=+147.409492939 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.397268 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.397369 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.397490 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.397530 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.397558 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.500149 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.500196 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.500207 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.500224 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.500236 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.602874 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.602959 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.602973 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.603009 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.603026 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.707161 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.707218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.707236 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.707262 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.707279 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.812435 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.812500 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.812520 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.812549 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.812571 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.915206 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.915305 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.915332 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.915368 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:02 crc kubenswrapper[4877]: I1211 18:02:02.915512 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:02Z","lastTransitionTime":"2025-12-11T18:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.018851 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.018902 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.018912 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.018929 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.018943 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.122309 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.122438 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.122467 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.122499 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.122522 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.224757 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.224823 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.224845 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.224872 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.224894 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.328003 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.328061 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.328077 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.328100 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.328116 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.431869 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.431925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.431936 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.431957 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.431969 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.535938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.536018 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.536036 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.536061 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.536079 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.638795 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.638850 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.638866 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.638890 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.638908 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.742273 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.742355 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.742390 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.742417 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.742435 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.845479 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.845536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.845552 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.845571 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.845583 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.948342 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.948420 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.948429 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.948449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:03 crc kubenswrapper[4877]: I1211 18:02:03.948460 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:03Z","lastTransitionTime":"2025-12-11T18:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.051892 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.051948 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.051958 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.051977 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.051988 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.062650 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.062718 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.062741 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.062775 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.062802 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.080867 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:02:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.085473 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.085555 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.085574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.085603 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.085623 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.107309 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:02:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.113249 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.113299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.113315 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.113343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.113361 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.132423 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:02:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.138485 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.138553 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.138574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.138604 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.138621 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.159894 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:02:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.167899 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.167974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.167994 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.168025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.168048 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.187072 4877 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T18:02:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0463d847-29f0-4a7f-a5d9-324258f999bf\\\",\\\"systemUUID\\\":\\\"c213a7b6-d969-4368-856f-6ea24dcb0da0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T18:02:04Z is after 2025-08-24T17:21:41Z" Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.187237 4877 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.189938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.189993 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.190040 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.190070 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.190143 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.214804 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.215149 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.215486 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.215609 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.215706 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.215718 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.215876 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:04 crc kubenswrapper[4877]: E1211 18:02:04.215993 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.292909 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.292968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.292979 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.292999 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.293010 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.394818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.394879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.394907 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.394934 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.394954 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.498137 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.498201 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.498211 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.498230 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.498242 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.601403 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.601481 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.601504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.601536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.601556 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.705968 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.706012 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.706024 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.706039 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.706052 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.809317 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.809429 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.809449 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.809483 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.809503 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.913276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.913365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.913416 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.913443 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:04 crc kubenswrapper[4877]: I1211 18:02:04.913461 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:04Z","lastTransitionTime":"2025-12-11T18:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.015734 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.015801 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.015818 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.015847 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.015865 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.118783 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.118834 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.118846 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.118864 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.118883 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.220829 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.220887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.220897 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.220915 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.220927 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.323796 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.323846 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.323855 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.323875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.323888 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.427595 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.427649 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.427661 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.427686 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.427702 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.530532 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.530586 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.530598 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.530618 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.530634 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.633995 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.634037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.634048 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.634067 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.634081 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.737118 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.737185 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.737198 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.737221 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.737234 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.841596 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.841672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.841687 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.841714 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.841733 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.945098 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.945168 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.945177 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.945205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:05 crc kubenswrapper[4877]: I1211 18:02:05.945218 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:05Z","lastTransitionTime":"2025-12-11T18:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.048115 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.048162 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.048171 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.048188 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.048199 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.151846 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.151916 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.151930 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.151954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.151969 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.214569 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.214605 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.214598 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.214569 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:06 crc kubenswrapper[4877]: E1211 18:02:06.214746 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:06 crc kubenswrapper[4877]: E1211 18:02:06.214921 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:06 crc kubenswrapper[4877]: E1211 18:02:06.214969 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:06 crc kubenswrapper[4877]: E1211 18:02:06.215020 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.255214 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.255272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.255286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.255307 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.255321 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.358465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.358508 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.358518 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.358543 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.358557 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.461259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.461317 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.461326 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.461343 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.461354 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.564205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.564251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.564261 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.564276 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.564287 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.667589 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.667629 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.667638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.667656 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.667666 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.770674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.770734 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.770746 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.770764 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.770778 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.873947 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.873996 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.874008 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.874030 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.874044 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.976790 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.976854 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.976871 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.976895 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:06 crc kubenswrapper[4877]: I1211 18:02:06.976912 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:06Z","lastTransitionTime":"2025-12-11T18:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.080296 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.080428 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.080454 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.080486 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.080512 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.183574 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.183635 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.183648 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.183675 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.183687 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.287365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.287451 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.287465 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.287486 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.287497 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.390700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.390750 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.390760 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.390779 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.390790 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.493846 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.493894 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.493905 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.493923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.493934 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.597703 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.597762 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.597780 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.597807 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.597831 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.701194 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.701292 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.701338 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.701367 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.701433 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.804572 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.805002 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.805134 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.805265 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.805456 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.908848 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.909205 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.909354 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.910066 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:07 crc kubenswrapper[4877]: I1211 18:02:07.910419 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:07Z","lastTransitionTime":"2025-12-11T18:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.020608 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.020676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.020705 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.020731 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.020749 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.124446 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.124504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.124521 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.124544 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.124562 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.214985 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.215041 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:08 crc kubenswrapper[4877]: E1211 18:02:08.215190 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.215211 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.215323 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:08 crc kubenswrapper[4877]: E1211 18:02:08.215567 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:08 crc kubenswrapper[4877]: E1211 18:02:08.215695 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:08 crc kubenswrapper[4877]: E1211 18:02:08.215823 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.228013 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.228063 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.228079 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.228102 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.228119 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.331355 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.331468 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.331491 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.331521 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.331544 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.434342 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.434483 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.434506 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.434540 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.434563 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.537824 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.537868 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.537879 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.537898 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.537911 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.640911 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.641007 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.641025 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.641052 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.641071 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.744835 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.744902 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.744925 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.744956 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.744980 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.848504 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.848584 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.848676 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.848708 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.848733 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.951518 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.951629 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.951647 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.951672 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:08 crc kubenswrapper[4877]: I1211 18:02:08.951686 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:08Z","lastTransitionTime":"2025-12-11T18:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.055876 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.056069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.056145 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.056183 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.056207 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.159108 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.159173 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.159191 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.159216 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.159237 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.215551 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:02:09 crc kubenswrapper[4877]: E1211 18:02:09.215733 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.246207 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.246183874 podStartE2EDuration="23.246183874s" podCreationTimestamp="2025-12-11 18:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.246164293 +0000 UTC m=+90.272408377" watchObservedRunningTime="2025-12-11 18:02:09.246183874 +0000 UTC m=+90.272427908" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.264756 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.265015 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.265090 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.265163 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.265185 
4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.267326 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dtgjg" podStartSLOduration=69.267298444 podStartE2EDuration="1m9.267298444s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.267205131 +0000 UTC m=+90.293449215" watchObservedRunningTime="2025-12-11 18:02:09.267298444 +0000 UTC m=+90.293542488" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.315903 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.315882776 podStartE2EDuration="1m10.315882776s" podCreationTimestamp="2025-12-11 18:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.31564615 +0000 UTC m=+90.341890194" watchObservedRunningTime="2025-12-11 18:02:09.315882776 +0000 UTC m=+90.342126820" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.365687 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qtfsf" podStartSLOduration=69.365664638 podStartE2EDuration="1m9.365664638s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.363705818 +0000 UTC m=+90.389949882" watchObservedRunningTime="2025-12-11 18:02:09.365664638 +0000 UTC m=+90.391908682" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.365966 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vjskq" podStartSLOduration=69.365959486 podStartE2EDuration="1m9.365959486s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.345310428 +0000 UTC m=+90.371554472" watchObservedRunningTime="2025-12-11 18:02:09.365959486 +0000 UTC m=+90.392203530" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.368674 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.368720 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.368732 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.368756 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.368770 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.401872 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.401850483 podStartE2EDuration="1m9.401850483s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.400674183 +0000 UTC m=+90.426918237" watchObservedRunningTime="2025-12-11 18:02:09.401850483 +0000 UTC m=+90.428094527" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.429970 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.429946062 podStartE2EDuration="1m10.429946062s" podCreationTimestamp="2025-12-11 18:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.429762337 +0000 UTC m=+90.456006401" watchObservedRunningTime="2025-12-11 18:02:09.429946062 +0000 UTC m=+90.456190106" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.472198 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.472251 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.472262 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.472282 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.472295 4877 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.478073 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.478048751 podStartE2EDuration="36.478048751s" podCreationTimestamp="2025-12-11 18:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.450369164 +0000 UTC m=+90.476613208" watchObservedRunningTime="2025-12-11 18:02:09.478048751 +0000 UTC m=+90.504292825" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.511812 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gwfnt" podStartSLOduration=69.511790834 podStartE2EDuration="1m9.511790834s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.511217009 +0000 UTC m=+90.537461053" watchObservedRunningTime="2025-12-11 18:02:09.511790834 +0000 UTC m=+90.538034878" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.575220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.575260 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.575269 4877 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.575283 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.575294 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.669835 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podStartSLOduration=69.669806173 podStartE2EDuration="1m9.669806173s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.669188968 +0000 UTC m=+90.695433012" watchObservedRunningTime="2025-12-11 18:02:09.669806173 +0000 UTC m=+90.696050277" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.671184 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-665tk" podStartSLOduration=69.671170718 podStartE2EDuration="1m9.671170718s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:09.624798843 +0000 UTC m=+90.651042947" watchObservedRunningTime="2025-12-11 18:02:09.671170718 +0000 UTC m=+90.697414832" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.678519 4877 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.678578 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.678594 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.678622 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.678641 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.781213 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.781286 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.781310 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.781338 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.781356 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.884209 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.884526 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.884535 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.884553 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.884564 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.987464 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.987507 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.987519 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.987536 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:09 crc kubenswrapper[4877]: I1211 18:02:09.987550 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:09Z","lastTransitionTime":"2025-12-11T18:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.090767 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.090799 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.090808 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.090822 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.090832 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.193280 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.193350 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.193407 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.193441 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.193464 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.215124 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.215252 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.215134 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:10 crc kubenswrapper[4877]: E1211 18:02:10.215350 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.215132 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:10 crc kubenswrapper[4877]: E1211 18:02:10.215536 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:10 crc kubenswrapper[4877]: E1211 18:02:10.215695 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:10 crc kubenswrapper[4877]: E1211 18:02:10.215814 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.296659 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.296700 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.296711 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.296726 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.296739 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.399490 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.400347 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.400680 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.400873 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.401010 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.504554 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.504606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.504620 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.504638 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.504651 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.608222 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.608299 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.608319 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.608357 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.608446 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.711220 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.711280 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.711297 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.711321 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.711338 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.813986 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.814048 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.814069 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.814095 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.814114 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.916874 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.916927 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.916939 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.916954 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:10 crc kubenswrapper[4877]: I1211 18:02:10.916965 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:10Z","lastTransitionTime":"2025-12-11T18:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.020204 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.020259 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.020271 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.020291 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.020303 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.124131 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.124198 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.124218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.124247 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.124266 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.226933 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.227006 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.227019 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.227037 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.227051 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.331058 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.331123 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.331139 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.331170 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.331189 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.433923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.433992 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.434013 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.434041 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.434063 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.537683 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.537749 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.537768 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.537792 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.537809 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.640306 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.640358 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.640381 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.640404 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.640416 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.743887 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.743960 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.743971 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.743987 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.743997 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.847906 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.847952 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.847970 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.847987 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.848000 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.950871 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.950938 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.950953 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.950974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:11 crc kubenswrapper[4877]: I1211 18:02:11.950990 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:11Z","lastTransitionTime":"2025-12-11T18:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.054842 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.054895 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.054905 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.054923 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.054937 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.157696 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.157752 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.157766 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.157791 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.157808 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.214338 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.214426 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.214478 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:12 crc kubenswrapper[4877]: E1211 18:02:12.214537 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.214552 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:12 crc kubenswrapper[4877]: E1211 18:02:12.214683 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:12 crc kubenswrapper[4877]: E1211 18:02:12.214844 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:12 crc kubenswrapper[4877]: E1211 18:02:12.215015 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.260773 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.260833 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.260842 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.260861 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.260872 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.364364 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.364431 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.364440 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.364458 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.364468 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.467482 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.467531 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.467547 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.467568 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.467583 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.570633 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.570684 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.570695 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.570714 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.570750 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.674014 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.674061 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.674076 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.674091 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.674104 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.781113 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.781189 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.781213 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.781245 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.781268 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.884042 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.884108 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.884118 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.884139 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.884151 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.987218 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.987272 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.987287 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.987330 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:12 crc kubenswrapper[4877]: I1211 18:02:12.987343 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:12Z","lastTransitionTime":"2025-12-11T18:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.090333 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.090450 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.090472 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.090499 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.090517 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.193866 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.193932 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.193951 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.193978 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.194000 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.296822 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.296888 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.296907 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.296935 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.296954 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.401285 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.401365 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.401427 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.401464 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.401493 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.505863 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.505934 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.505953 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.505984 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.506003 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.609510 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.609606 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.609632 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.609663 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.609688 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.712120 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.712184 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.712201 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.712221 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.712234 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.814681 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.814739 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.814752 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.814770 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.814784 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.918078 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.918144 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.918157 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.918179 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:13 crc kubenswrapper[4877]: I1211 18:02:13.918195 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:13Z","lastTransitionTime":"2025-12-11T18:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.020528 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.020563 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.020572 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.020588 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.020602 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.123983 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.124053 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.124067 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.124084 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.124097 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.214810 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.214922 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.214922 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:14 crc kubenswrapper[4877]: E1211 18:02:14.215001 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:14 crc kubenswrapper[4877]: E1211 18:02:14.215107 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.215152 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:14 crc kubenswrapper[4877]: E1211 18:02:14.215214 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:14 crc kubenswrapper[4877]: E1211 18:02:14.215327 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.226932 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.226974 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.226984 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.227000 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.227012 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.329765 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.329829 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.329839 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.329875 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.329888 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.432799 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.432865 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.432874 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.432893 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.432903 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.535749 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.535816 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.535829 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.535847 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.535858 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.557165 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.557247 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.557266 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.557318 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.557338 4877 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T18:02:14Z","lastTransitionTime":"2025-12-11T18:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.609117 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57"] Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.609614 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.612325 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.612532 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.612867 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.614021 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.730776 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.730861 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.730900 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.730955 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.730991 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.831816 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.831885 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.831940 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.831966 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.832011 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.832139 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.832195 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.833402 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.844416 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.864275 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fqk57\" (UID: \"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: I1211 18:02:14.929349 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" Dec 11 18:02:14 crc kubenswrapper[4877]: W1211 18:02:14.948568 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5099cdcf_4fe6_4dd4_83be_d8ec5b4b4d90.slice/crio-a3e67481cf67b639d187fa218df043db6cfec9b5ea1e991389f6e6b627745060 WatchSource:0}: Error finding container a3e67481cf67b639d187fa218df043db6cfec9b5ea1e991389f6e6b627745060: Status 404 returned error can't find the container with id a3e67481cf67b639d187fa218df043db6cfec9b5ea1e991389f6e6b627745060 Dec 11 18:02:15 crc kubenswrapper[4877]: I1211 18:02:15.804766 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" event={"ID":"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90","Type":"ContainerStarted","Data":"a2a3c6bbabd83b887d20d34ce27355cdcb6d2f4673567eb024a15079acb8b217"} Dec 11 18:02:15 crc kubenswrapper[4877]: I1211 18:02:15.805159 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" event={"ID":"5099cdcf-4fe6-4dd4-83be-d8ec5b4b4d90","Type":"ContainerStarted","Data":"a3e67481cf67b639d187fa218df043db6cfec9b5ea1e991389f6e6b627745060"} Dec 11 18:02:15 crc kubenswrapper[4877]: I1211 18:02:15.826023 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fqk57" podStartSLOduration=75.825999125 podStartE2EDuration="1m15.825999125s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:15.825332188 +0000 UTC m=+96.851576252" watchObservedRunningTime="2025-12-11 18:02:15.825999125 +0000 UTC m=+96.852243189" Dec 11 18:02:16 crc kubenswrapper[4877]: I1211 18:02:16.215201 4877 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:16 crc kubenswrapper[4877]: I1211 18:02:16.215252 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:16 crc kubenswrapper[4877]: I1211 18:02:16.215273 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:16 crc kubenswrapper[4877]: I1211 18:02:16.215321 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:16 crc kubenswrapper[4877]: E1211 18:02:16.215364 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:16 crc kubenswrapper[4877]: E1211 18:02:16.215510 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:16 crc kubenswrapper[4877]: E1211 18:02:16.215655 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:16 crc kubenswrapper[4877]: E1211 18:02:16.215689 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:18 crc kubenswrapper[4877]: I1211 18:02:18.214336 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:18 crc kubenswrapper[4877]: I1211 18:02:18.214403 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:18 crc kubenswrapper[4877]: I1211 18:02:18.214339 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.214519 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:18 crc kubenswrapper[4877]: I1211 18:02:18.214502 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.214596 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.214939 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.215438 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:18 crc kubenswrapper[4877]: I1211 18:02:18.578889 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.579151 4877 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:02:18 crc kubenswrapper[4877]: E1211 18:02:18.579277 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs podName:fa0b7b99-8d0a-48ad-9f98-da5947644472 nodeName:}" failed. No retries permitted until 2025-12-11 18:03:22.579250778 +0000 UTC m=+163.605494882 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs") pod "network-metrics-daemon-sn9xv" (UID: "fa0b7b99-8d0a-48ad-9f98-da5947644472") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 18:02:20 crc kubenswrapper[4877]: I1211 18:02:20.214288 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:20 crc kubenswrapper[4877]: I1211 18:02:20.214408 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:20 crc kubenswrapper[4877]: I1211 18:02:20.214512 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:20 crc kubenswrapper[4877]: I1211 18:02:20.214425 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:20 crc kubenswrapper[4877]: E1211 18:02:20.214597 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:20 crc kubenswrapper[4877]: E1211 18:02:20.214441 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:20 crc kubenswrapper[4877]: E1211 18:02:20.214766 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:20 crc kubenswrapper[4877]: E1211 18:02:20.214870 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:21 crc kubenswrapper[4877]: I1211 18:02:21.215955 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:02:21 crc kubenswrapper[4877]: E1211 18:02:21.216247 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:02:22 crc kubenswrapper[4877]: I1211 18:02:22.215159 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:22 crc kubenswrapper[4877]: I1211 18:02:22.215560 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:22 crc kubenswrapper[4877]: I1211 18:02:22.215406 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:22 crc kubenswrapper[4877]: I1211 18:02:22.215325 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:22 crc kubenswrapper[4877]: E1211 18:02:22.215919 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:22 crc kubenswrapper[4877]: E1211 18:02:22.216061 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:22 crc kubenswrapper[4877]: E1211 18:02:22.216674 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:22 crc kubenswrapper[4877]: E1211 18:02:22.216879 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:24 crc kubenswrapper[4877]: I1211 18:02:24.215361 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:24 crc kubenswrapper[4877]: I1211 18:02:24.215489 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:24 crc kubenswrapper[4877]: I1211 18:02:24.215533 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:24 crc kubenswrapper[4877]: E1211 18:02:24.215566 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:24 crc kubenswrapper[4877]: I1211 18:02:24.215493 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:24 crc kubenswrapper[4877]: E1211 18:02:24.215678 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:24 crc kubenswrapper[4877]: E1211 18:02:24.215868 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:24 crc kubenswrapper[4877]: E1211 18:02:24.216000 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:26 crc kubenswrapper[4877]: I1211 18:02:26.214259 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:26 crc kubenswrapper[4877]: I1211 18:02:26.214312 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:26 crc kubenswrapper[4877]: I1211 18:02:26.214259 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:26 crc kubenswrapper[4877]: I1211 18:02:26.214297 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:26 crc kubenswrapper[4877]: E1211 18:02:26.214496 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:26 crc kubenswrapper[4877]: E1211 18:02:26.214529 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:26 crc kubenswrapper[4877]: E1211 18:02:26.214655 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:26 crc kubenswrapper[4877]: E1211 18:02:26.214941 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:28 crc kubenswrapper[4877]: I1211 18:02:28.214739 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:28 crc kubenswrapper[4877]: I1211 18:02:28.214808 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:28 crc kubenswrapper[4877]: I1211 18:02:28.214828 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:28 crc kubenswrapper[4877]: I1211 18:02:28.214768 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:28 crc kubenswrapper[4877]: E1211 18:02:28.214932 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:28 crc kubenswrapper[4877]: E1211 18:02:28.215004 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:28 crc kubenswrapper[4877]: E1211 18:02:28.215143 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:28 crc kubenswrapper[4877]: E1211 18:02:28.215368 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:30 crc kubenswrapper[4877]: I1211 18:02:30.215246 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:30 crc kubenswrapper[4877]: I1211 18:02:30.215281 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:30 crc kubenswrapper[4877]: I1211 18:02:30.215357 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:30 crc kubenswrapper[4877]: I1211 18:02:30.215425 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:30 crc kubenswrapper[4877]: E1211 18:02:30.215419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:30 crc kubenswrapper[4877]: E1211 18:02:30.215579 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:30 crc kubenswrapper[4877]: E1211 18:02:30.215694 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:30 crc kubenswrapper[4877]: E1211 18:02:30.215774 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:32 crc kubenswrapper[4877]: I1211 18:02:32.214518 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:32 crc kubenswrapper[4877]: I1211 18:02:32.214519 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:32 crc kubenswrapper[4877]: I1211 18:02:32.214569 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:32 crc kubenswrapper[4877]: I1211 18:02:32.214604 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:32 crc kubenswrapper[4877]: E1211 18:02:32.214718 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:32 crc kubenswrapper[4877]: E1211 18:02:32.214800 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:32 crc kubenswrapper[4877]: E1211 18:02:32.215058 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:32 crc kubenswrapper[4877]: E1211 18:02:32.215137 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.216308 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:02:33 crc kubenswrapper[4877]: E1211 18:02:33.216710 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qvb5p_openshift-ovn-kubernetes(ea4114b7-a44c-4220-a321-9f18bbb90151)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.864268 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/1.log" Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.864794 4877 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/0.log" Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.864855 4877 generic.go:334] "Generic (PLEG): container finished" podID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c" containerID="9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6" exitCode=1 Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.864893 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerDied","Data":"9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6"} Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.864944 4877 scope.go:117] "RemoveContainer" containerID="e2781aade06db495258dc52bc072b16ccf41b522c6f922e284dceb42fd25f064" Dec 11 18:02:33 crc kubenswrapper[4877]: I1211 18:02:33.867634 4877 scope.go:117] "RemoveContainer" containerID="9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6" Dec 11 18:02:33 crc kubenswrapper[4877]: E1211 18:02:33.868075 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gwfnt_openshift-multus(61afe7d0-ec5b-41aa-a8fb-6628b863a59c)\"" pod="openshift-multus/multus-gwfnt" podUID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c" Dec 11 18:02:34 crc kubenswrapper[4877]: I1211 18:02:34.214912 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:34 crc kubenswrapper[4877]: I1211 18:02:34.214927 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:34 crc kubenswrapper[4877]: I1211 18:02:34.214996 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:34 crc kubenswrapper[4877]: I1211 18:02:34.215197 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:34 crc kubenswrapper[4877]: E1211 18:02:34.215180 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:34 crc kubenswrapper[4877]: E1211 18:02:34.215478 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:34 crc kubenswrapper[4877]: E1211 18:02:34.215555 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:34 crc kubenswrapper[4877]: E1211 18:02:34.215828 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:34 crc kubenswrapper[4877]: I1211 18:02:34.870270 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/1.log" Dec 11 18:02:36 crc kubenswrapper[4877]: I1211 18:02:36.215129 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:36 crc kubenswrapper[4877]: E1211 18:02:36.215419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:36 crc kubenswrapper[4877]: I1211 18:02:36.215547 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:36 crc kubenswrapper[4877]: I1211 18:02:36.215604 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:36 crc kubenswrapper[4877]: E1211 18:02:36.215709 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:36 crc kubenswrapper[4877]: I1211 18:02:36.215780 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:36 crc kubenswrapper[4877]: E1211 18:02:36.215863 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:36 crc kubenswrapper[4877]: E1211 18:02:36.215948 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:38 crc kubenswrapper[4877]: I1211 18:02:38.215060 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:38 crc kubenswrapper[4877]: I1211 18:02:38.215178 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:38 crc kubenswrapper[4877]: I1211 18:02:38.215223 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:38 crc kubenswrapper[4877]: I1211 18:02:38.215298 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:38 crc kubenswrapper[4877]: E1211 18:02:38.215429 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:38 crc kubenswrapper[4877]: E1211 18:02:38.215536 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:38 crc kubenswrapper[4877]: E1211 18:02:38.215740 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:38 crc kubenswrapper[4877]: E1211 18:02:38.215906 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:39 crc kubenswrapper[4877]: E1211 18:02:39.172365 4877 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 11 18:02:39 crc kubenswrapper[4877]: E1211 18:02:39.400847 4877 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 18:02:40 crc kubenswrapper[4877]: I1211 18:02:40.214533 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:40 crc kubenswrapper[4877]: I1211 18:02:40.214721 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:40 crc kubenswrapper[4877]: I1211 18:02:40.214581 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:40 crc kubenswrapper[4877]: I1211 18:02:40.214646 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:40 crc kubenswrapper[4877]: E1211 18:02:40.214904 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:40 crc kubenswrapper[4877]: E1211 18:02:40.215107 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:40 crc kubenswrapper[4877]: E1211 18:02:40.215187 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:40 crc kubenswrapper[4877]: E1211 18:02:40.215275 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:42 crc kubenswrapper[4877]: I1211 18:02:42.215320 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:42 crc kubenswrapper[4877]: I1211 18:02:42.215361 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:42 crc kubenswrapper[4877]: I1211 18:02:42.215432 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:42 crc kubenswrapper[4877]: E1211 18:02:42.215503 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:42 crc kubenswrapper[4877]: E1211 18:02:42.215567 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:42 crc kubenswrapper[4877]: I1211 18:02:42.215598 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:42 crc kubenswrapper[4877]: E1211 18:02:42.215721 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:42 crc kubenswrapper[4877]: E1211 18:02:42.215838 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:44 crc kubenswrapper[4877]: I1211 18:02:44.214723 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:44 crc kubenswrapper[4877]: I1211 18:02:44.214783 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:44 crc kubenswrapper[4877]: I1211 18:02:44.214908 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:44 crc kubenswrapper[4877]: E1211 18:02:44.214904 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:44 crc kubenswrapper[4877]: I1211 18:02:44.214977 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:44 crc kubenswrapper[4877]: E1211 18:02:44.215173 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:44 crc kubenswrapper[4877]: E1211 18:02:44.215236 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:44 crc kubenswrapper[4877]: E1211 18:02:44.215300 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:44 crc kubenswrapper[4877]: E1211 18:02:44.402194 4877 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 18:02:45 crc kubenswrapper[4877]: I1211 18:02:45.215930 4877 scope.go:117] "RemoveContainer" containerID="9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6" Dec 11 18:02:45 crc kubenswrapper[4877]: I1211 18:02:45.907065 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/1.log" Dec 11 18:02:45 crc kubenswrapper[4877]: I1211 18:02:45.907125 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerStarted","Data":"276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0"} Dec 11 18:02:46 crc kubenswrapper[4877]: I1211 18:02:46.215174 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:46 crc kubenswrapper[4877]: I1211 18:02:46.215303 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:46 crc kubenswrapper[4877]: I1211 18:02:46.215202 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:46 crc kubenswrapper[4877]: E1211 18:02:46.215421 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:46 crc kubenswrapper[4877]: I1211 18:02:46.215174 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:46 crc kubenswrapper[4877]: E1211 18:02:46.215564 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:46 crc kubenswrapper[4877]: E1211 18:02:46.215666 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:46 crc kubenswrapper[4877]: E1211 18:02:46.215708 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.214939 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:48 crc kubenswrapper[4877]: E1211 18:02:48.215082 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.215189 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.215609 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.215626 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:48 crc kubenswrapper[4877]: E1211 18:02:48.215710 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:48 crc kubenswrapper[4877]: E1211 18:02:48.215867 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:48 crc kubenswrapper[4877]: E1211 18:02:48.215860 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.216267 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.919703 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/3.log" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.922845 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerStarted","Data":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.923282 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:02:48 crc kubenswrapper[4877]: I1211 18:02:48.952005 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podStartSLOduration=108.951980655 podStartE2EDuration="1m48.951980655s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:02:48.9513912 +0000 UTC m=+129.977635254" watchObservedRunningTime="2025-12-11 18:02:48.951980655 +0000 UTC m=+129.978224699" Dec 11 18:02:49 crc kubenswrapper[4877]: I1211 18:02:49.105452 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sn9xv"] Dec 11 18:02:49 crc kubenswrapper[4877]: I1211 18:02:49.105622 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:49 crc kubenswrapper[4877]: E1211 18:02:49.105759 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:49 crc kubenswrapper[4877]: E1211 18:02:49.403213 4877 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 18:02:50 crc kubenswrapper[4877]: I1211 18:02:50.214687 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:50 crc kubenswrapper[4877]: E1211 18:02:50.214857 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:50 crc kubenswrapper[4877]: I1211 18:02:50.214879 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:50 crc kubenswrapper[4877]: E1211 18:02:50.215118 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:50 crc kubenswrapper[4877]: I1211 18:02:50.215137 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:50 crc kubenswrapper[4877]: E1211 18:02:50.215453 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:51 crc kubenswrapper[4877]: I1211 18:02:51.215093 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:51 crc kubenswrapper[4877]: E1211 18:02:51.216170 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:52 crc kubenswrapper[4877]: I1211 18:02:52.214521 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:52 crc kubenswrapper[4877]: E1211 18:02:52.214854 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:52 crc kubenswrapper[4877]: I1211 18:02:52.214573 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:52 crc kubenswrapper[4877]: E1211 18:02:52.215028 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:52 crc kubenswrapper[4877]: I1211 18:02:52.214563 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:52 crc kubenswrapper[4877]: E1211 18:02:52.215156 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:53 crc kubenswrapper[4877]: I1211 18:02:53.214779 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:53 crc kubenswrapper[4877]: E1211 18:02:53.214970 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sn9xv" podUID="fa0b7b99-8d0a-48ad-9f98-da5947644472" Dec 11 18:02:54 crc kubenswrapper[4877]: I1211 18:02:54.214899 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:54 crc kubenswrapper[4877]: I1211 18:02:54.214992 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:54 crc kubenswrapper[4877]: E1211 18:02:54.215059 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 18:02:54 crc kubenswrapper[4877]: I1211 18:02:54.214993 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:54 crc kubenswrapper[4877]: E1211 18:02:54.215166 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 18:02:54 crc kubenswrapper[4877]: E1211 18:02:54.215369 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.214716 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.219189 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.219588 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.382980 4877 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.430012 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7ffdw"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.431277 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.456763 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.456768 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.456784 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.456973 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.457562 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.458366 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 
18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.458708 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.459021 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.459683 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.460168 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.461234 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.466738 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.481771 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.482440 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.482451 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.482833 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.501406 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.501883 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.502681 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.503527 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.503998 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.504894 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p7gb9"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.505572 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.512487 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.513227 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.513282 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.514279 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.518013 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.518419 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.518612 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqnqb"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.519663 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.523457 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.523904 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c92n"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.524577 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.525583 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cvvxz"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536442 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536752 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537121 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537389 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537756 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.523938 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539085 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.526044 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-audit-dir\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539263 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvd7m\" (UniqueName: \"kubernetes.io/projected/efaee247-0579-47df-b29f-a6009d7302c3-kube-api-access-cvd7m\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539295 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539329 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-audit\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539395 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539431 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-node-pullsecrets\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539449 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539467 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-encryption-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539507 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-image-import-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539547 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-etcd-client\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539570 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-serving-cert\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539662 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.527300 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.539909 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.540090 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.527345 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.527404 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.527431 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.528967 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.530610 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.530660 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.545824 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.545954 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.546069 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.546182 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.546747 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.530698 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.530904 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531181 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531356 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531499 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531534 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531819 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.531855 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.532250 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 
18:02:55.536315 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536360 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536435 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536484 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536522 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536575 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536641 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536773 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536849 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536922 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536963 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.536994 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537027 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537060 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537095 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537133 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537345 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.550160 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537420 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537468 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.537611 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.538041 4877 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.568780 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.570136 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.570751 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.570919 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.572293 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.574468 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kfghc"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.575951 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.576966 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.577448 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.595185 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.595948 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.596276 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.599483 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.600280 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ftfkv"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.600830 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.601939 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.602161 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.602361 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.604186 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.604959 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.609693 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612167 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612255 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612593 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612736 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612745 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612794 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612792 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612755 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612952 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.612671 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.613056 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.614531 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.614626 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.614761 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.614864 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mdpb8"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.615598 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.617096 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.617822 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.619837 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.621744 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.621766 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.622644 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.624146 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.624630 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.624827 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.624830 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.624949 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.625083 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.625189 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.625273 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.626744 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.625328 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.625426 4877 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.627250 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.628303 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.628812 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.630870 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.635871 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.636047 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.636238 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.642313 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.642713 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643061 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643174 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgx9\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-kube-api-access-kvgx9\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643221 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643277 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643320 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 
crc kubenswrapper[4877]: I1211 18:02:55.643362 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643413 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-image-import-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643474 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-etcd-client\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643506 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643542 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-config\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643576 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643609 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbd78\" (UniqueName: \"kubernetes.io/projected/df9b93ff-b380-4673-9fc2-6c50f0523377-kube-api-access-sbd78\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643649 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643688 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643718 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvzf\" (UniqueName: \"kubernetes.io/projected/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-kube-api-access-nbvzf\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643756 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-audit-dir\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643783 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvd7m\" (UniqueName: \"kubernetes.io/projected/efaee247-0579-47df-b29f-a6009d7302c3-kube-api-access-cvd7m\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643887 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643947 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-images\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.643997 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-audit\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644038 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644104 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-node-pullsecrets\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644196 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-encryption-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644241 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9whj5\" (UniqueName: \"kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644282 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644316 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644322 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-serving-cert\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644451 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644485 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644516 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644830 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644870 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644875 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-audit-dir\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644910 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644943 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df9b93ff-b380-4673-9fc2-6c50f0523377-serving-cert\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.644966 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/df9b93ff-b380-4673-9fc2-6c50f0523377-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.645264 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.645978 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-image-import-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.646136 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-audit\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.646703 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.646933 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-etcd-serving-ca\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.647036 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/efaee247-0579-47df-b29f-a6009d7302c3-node-pullsecrets\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.647627 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.650179 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.652320 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efaee247-0579-47df-b29f-a6009d7302c3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.653042 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-etcd-client\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.654260 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.654845 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-encryption-config\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.657283 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.658046 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.658682 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.658889 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.659270 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.660760 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.661015 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.663175 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.663697 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.666917 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.669495 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.671205 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.672074 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.672947 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.673220 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9fjz"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.674259 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.676131 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.676975 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efaee247-0579-47df-b29f-a6009d7302c3-serving-cert\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.677167 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.677908 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.684435 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sjt2h"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.685502 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.688747 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.689815 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.689937 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.690864 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7ffdw"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.692180 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.694093 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.694762 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.695088 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.696721 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.697809 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p7gb9"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.698755 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-842ws"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.699743 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.699864 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.700953 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.702000 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.703073 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.704126 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.706577 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cvvxz"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.707705 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqnqb"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.708838 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ftfkv"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.709912 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.711061 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.712251 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.713691 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.716548 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.716610 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.717805 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9fjz"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.719017 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.720128 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.721300 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-svrh5"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.722205 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svrh5"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.722450 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vrbhw"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.723425 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vrbhw"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.723517 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.724933 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.726057 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.727106 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c92n"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.728161 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.729303 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.730708 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.732720 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kfghc"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.734309 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.734770 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.736856 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"]
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.747887 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9whj5\" (UniqueName: \"kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.747965 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748005 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748035 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748068 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748115 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748148 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748176 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df9b93ff-b380-4673-9fc2-6c50f0523377-serving-cert\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748201 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/df9b93ff-b380-4673-9fc2-6c50f0523377-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748229 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748256 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748281 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgx9\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-kube-api-access-kvgx9\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748313 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748341 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748399 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748444 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748472 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-config\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748510 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748538 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbd78\" (UniqueName: \"kubernetes.io/projected/df9b93ff-b380-4673-9fc2-6c50f0523377-kube-api-access-sbd78\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748569 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"
Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748601 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName:
\"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748626 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvzf\" (UniqueName: \"kubernetes.io/projected/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-kube-api-access-nbvzf\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748666 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748711 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-images\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.748744 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 
11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.750401 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.750660 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-config\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.751489 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.751547 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.751698 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752020 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752161 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752411 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/df9b93ff-b380-4673-9fc2-6c50f0523377-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752414 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752645 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: 
\"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.752851 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.753028 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.753668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-images\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.753723 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.754493 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.754530 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.754836 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.755019 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.755561 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.755898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df9b93ff-b380-4673-9fc2-6c50f0523377-serving-cert\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.756420 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sjt2h"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.756552 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.756793 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.757331 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.757705 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.758921 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vrbhw"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.759205 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.760222 
4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-842ws"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.761468 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tnprh"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.762421 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.762508 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tnprh"] Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.773958 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.794591 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.815382 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.834409 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.853930 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.873688 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.893250 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 
18:02:55.913530 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.933753 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.953986 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.973717 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 18:02:55 crc kubenswrapper[4877]: I1211 18:02:55.994404 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.014925 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.034099 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.054294 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.074129 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.094095 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.115099 
4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.134162 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.154194 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.173965 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.195331 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.213996 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.214316 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.214354 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.214555 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.233849 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.255064 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.274876 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.314799 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.334180 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.353812 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.380920 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.394171 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.450585 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvd7m\" (UniqueName: \"kubernetes.io/projected/efaee247-0579-47df-b29f-a6009d7302c3-kube-api-access-cvd7m\") pod \"apiserver-76f77b778f-7ffdw\" (UID: \"efaee247-0579-47df-b29f-a6009d7302c3\") " pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 
18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.454454 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455071 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455098 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8bc79c0-914b-4875-a155-01d87a497f69-serving-cert\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455118 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455140 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-config\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 
18:02:56.455159 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7djd\" (UniqueName: \"kubernetes.io/projected/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-kube-api-access-r7djd\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455176 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fsk\" (UniqueName: \"kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455366 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcgf\" (UniqueName: \"kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455522 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-trusted-ca\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455558 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config\") pod 
\"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455593 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455650 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455696 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-serving-cert\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455758 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455850 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455944 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.455981 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-policies\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456024 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4vd\" (UniqueName: \"kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456092 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456139 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-client\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456193 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92aca579-224f-4b7e-9150-45ae61d15ca6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456234 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456281 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456354 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-encryption-config\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456509 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-auth-proxy-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456553 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace78071-47e6-4240-9b3f-e677ac9c360d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.456570 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:56.956548226 +0000 UTC m=+137.982792270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456638 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456711 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-config\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456738 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456785 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-serving-cert\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456815 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvx4\" (UniqueName: \"kubernetes.io/projected/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-kube-api-access-qmvx4\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456844 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.456975 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457083 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457132 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace78071-47e6-4240-9b3f-e677ac9c360d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457178 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjdnh\" (UniqueName: \"kubernetes.io/projected/92aca579-224f-4b7e-9150-45ae61d15ca6-kube-api-access-tjdnh\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457227 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6j7\" (UniqueName: \"kubernetes.io/projected/d8bc79c0-914b-4875-a155-01d87a497f69-kube-api-access-tf6j7\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457287 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457334 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457366 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpk96\" (UniqueName: \"kubernetes.io/projected/ace78071-47e6-4240-9b3f-e677ac9c360d-kube-api-access-jpk96\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457477 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a2a6190e-6fb8-4f61-99e1-63972f85df6d-machine-approver-tls\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457520 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457554 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457588 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457698 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457744 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457777 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-dir\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457821 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s552c\" (UniqueName: \"kubernetes.io/projected/1cae51ed-b80c-4017-9b9f-1485a809f145-kube-api-access-s552c\") pod \"downloads-7954f5f757-cvvxz\" (UID: \"1cae51ed-b80c-4017-9b9f-1485a809f145\") " pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457874 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrbw\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457925 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc924\" (UniqueName: \"kubernetes.io/projected/a2a6190e-6fb8-4f61-99e1-63972f85df6d-kube-api-access-vc924\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.457969 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.474761 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.494765 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.514406 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.534417 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.554780 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.558775 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.558992 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.058963638 +0000 UTC m=+138.085207672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559150 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559211 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559280 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bad86dd9-d430-4692-9caf-5a7218fd02b7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559317 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-registration-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559353 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559446 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559533 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4vd\" (UniqueName: \"kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559618 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99c5646a-3905-4010-9538-53090f0160f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559685 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559743 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-client\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559807 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.559827 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.059818481 +0000 UTC m=+138.086062525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559868 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-config\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.559922 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560008 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvx4\" (UniqueName: \"kubernetes.io/projected/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-kube-api-access-qmvx4\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560231 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560426 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560488 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560538 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/217c0bda-35b4-4332-83de-9210f6906544-metrics-tls\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560596 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560654 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-profile-collector-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560708 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace78071-47e6-4240-9b3f-e677ac9c360d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560766 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjdnh\" (UniqueName: \"kubernetes.io/projected/92aca579-224f-4b7e-9150-45ae61d15ca6-kube-api-access-tjdnh\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560821 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6j7\" (UniqueName: \"kubernetes.io/projected/d8bc79c0-914b-4875-a155-01d87a497f69-kube-api-access-tf6j7\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560917 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config\") pod 
\"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.560966 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561027 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpk96\" (UniqueName: \"kubernetes.io/projected/ace78071-47e6-4240-9b3f-e677ac9c360d-kube-api-access-jpk96\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561087 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p9dc\" (UniqueName: \"kubernetes.io/projected/b7fda896-ae7e-403b-bab7-365140566954-kube-api-access-4p9dc\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561144 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-default-certificate\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561194 
4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561248 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a2a6190e-6fb8-4f61-99e1-63972f85df6d-machine-approver-tls\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561305 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561357 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f72d4ca-090e-420f-894c-d4571bdab1a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561459 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561475 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-config\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561517 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74678aa2-d0e1-4db8-854b-4e545859f4b1-cert\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561585 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561628 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace78071-47e6-4240-9b3f-e677ac9c360d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561655 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561716 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bad86dd9-d430-4692-9caf-5a7218fd02b7-trusted-ca\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561767 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561800 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561854 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4tzv\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-kube-api-access-c4tzv\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.561917 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-config\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562001 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562048 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-dir\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562099 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-cabundle\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562145 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562151 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc924\" (UniqueName: \"kubernetes.io/projected/a2a6190e-6fb8-4f61-99e1-63972f85df6d-kube-api-access-vc924\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.562402 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563500 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563496 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563595 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563616 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-dir\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563659 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-plugins-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563732 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8bc79c0-914b-4875-a155-01d87a497f69-serving-cert\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563770 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c909d56d-484a-4d56-a279-1d464fe45dc8-metrics-tls\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563879 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-csi-data-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563912 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8c61f40-7215-401b-9e07-74b95ed041cc-serving-cert\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.563991 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564040 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftv8\" (UniqueName: \"kubernetes.io/projected/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-kube-api-access-tftv8\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564069 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63141648-7cad-41e1-96e7-38c0305347b0-service-ca-bundle\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564096 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99c5646a-3905-4010-9538-53090f0160f5-proxy-tls\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564140 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mkxz\" (UniqueName: \"kubernetes.io/projected/a9eec3bb-1807-465b-b272-cd2767e499d5-kube-api-access-4mkxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564166 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4p4\" (UniqueName: \"kubernetes.io/projected/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-kube-api-access-dk4p4\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564190 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpc9\" (UniqueName: \"kubernetes.io/projected/99c5646a-3905-4010-9538-53090f0160f5-kube-api-access-4tpc9\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564219 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f72d4ca-090e-420f-894c-d4571bdab1a3-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564244 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564280 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbjz\" (UniqueName: \"kubernetes.io/projected/217c0bda-35b4-4332-83de-9210f6906544-kube-api-access-fsbjz\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564310 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwb8\" (UniqueName: \"kubernetes.io/projected/13bee5ff-144b-42e3-b5ae-01ed760c979b-kube-api-access-mzwb8\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564340 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcgf\" (UniqueName: \"kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564360 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c61f40-7215-401b-9e07-74b95ed041cc-config\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564422 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-trusted-ca\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564453 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-mountpoint-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564486 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564524 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-srv-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564573 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-serving-cert\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564683 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40febc91-7d7a-4130-acbb-6c8c434033df-tmpfs\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564697 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564734 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jxrz\" (UniqueName: \"kubernetes.io/projected/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-kube-api-access-8jxrz\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564705 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8bc79c0-914b-4875-a155-01d87a497f69-service-ca-bundle\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.564805 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565057 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565134 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfkr\" (UniqueName: \"kubernetes.io/projected/63141648-7cad-41e1-96e7-38c0305347b0-kube-api-access-zsfkr\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565208 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-service-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565232 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-webhook-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565238 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-client\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565260 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-config\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565367 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-policies\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565468 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92aca579-224f-4b7e-9150-45ae61d15ca6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565529 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbnv\" (UniqueName: \"kubernetes.io/projected/40febc91-7d7a-4130-acbb-6c8c434033df-kube-api-access-jfbnv\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565551 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9eec3bb-1807-465b-b272-cd2767e499d5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565575 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-metrics-certs\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565705 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-srv-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.565945 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-auth-proxy-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566029 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace78071-47e6-4240-9b3f-e677ac9c360d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566108 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566138 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-encryption-config\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566175 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566203 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b7fda896-ae7e-403b-bab7-365140566954-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566221 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a2a6190e-6fb8-4f61-99e1-63972f85df6d-machine-approver-tls\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566238 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtdf\" (UniqueName: \"kubernetes.io/projected/59572212-89da-4750-a504-ded844d647b9-kube-api-access-qmtdf\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566266 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75bx\" (UniqueName: \"kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.566992 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-trusted-ca\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567225 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567326 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8bc79c0-914b-4875-a155-01d87a497f69-serving-cert\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567336 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-audit-policies\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567364 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567418 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-serving-cert\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567440 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-serving-cert\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567463 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-socket-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568107 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2a6190e-6fb8-4f61-99e1-63972f85df6d-auth-proxy-config\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.567499 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfs5h\" (UniqueName: \"kubernetes.io/projected/b8c61f40-7215-401b-9e07-74b95ed041cc-kube-api-access-pfs5h\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568298 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f72d4ca-090e-420f-894c-d4571bdab1a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568301 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-serving-cert\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568353 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568603 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568634 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568752 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gffzz\" (UniqueName: \"kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568815 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb8w\" (UniqueName: \"kubernetes.io/projected/6a67db71-72a9-4d82-94e7-673e78b11dc6-kube-api-access-mlb8w\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568861 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568909 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.568965 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569002 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-certs\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569061 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569103 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv"
Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569151 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName:
\"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-client\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569195 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217c0bda-35b4-4332-83de-9210f6906544-config-volume\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569330 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569370 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569415 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569416 4877 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vgrbw\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569471 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569494 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s552c\" (UniqueName: \"kubernetes.io/projected/1cae51ed-b80c-4017-9b9f-1485a809f145-kube-api-access-s552c\") pod \"downloads-7954f5f757-cvvxz\" (UID: \"1cae51ed-b80c-4017-9b9f-1485a809f145\") " pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569514 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9h9g\" (UniqueName: \"kubernetes.io/projected/d4d0a314-0249-46d9-9bdc-af9b7e063110-kube-api-access-q9h9g\") pod \"migrator-59844c95c7-f2jq4\" (UID: \"d4d0a314-0249-46d9-9bdc-af9b7e063110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569537 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc 
kubenswrapper[4877]: I1211 18:02:56.569554 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-stats-auth\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569570 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85nh9\" (UniqueName: \"kubernetes.io/projected/74678aa2-d0e1-4db8-854b-4e545859f4b1-kube-api-access-85nh9\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569594 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-config\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569610 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7djd\" (UniqueName: \"kubernetes.io/projected/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-kube-api-access-r7djd\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569635 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fsk\" (UniqueName: \"kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: 
\"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569654 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76g2\" (UniqueName: \"kubernetes.io/projected/e513abf7-304c-47b9-8940-8f17f618e491-kube-api-access-x76g2\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569671 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569723 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szdq\" (UniqueName: \"kubernetes.io/projected/8c92f553-9f22-485e-80a4-86b223a70ef7-kube-api-access-2szdq\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569742 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569766 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9eec3bb-1807-465b-b272-cd2767e499d5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569785 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-proxy-tls\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569834 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569859 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569882 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5s7l\" (UniqueName: \"kubernetes.io/projected/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-kube-api-access-c5s7l\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.569926 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-key\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570042 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570062 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46b9\" (UniqueName: \"kubernetes.io/projected/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-kube-api-access-n46b9\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570080 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs82p\" (UniqueName: \"kubernetes.io/projected/c909d56d-484a-4d56-a279-1d464fe45dc8-kube-api-access-fs82p\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570096 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-images\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570118 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570136 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-node-bootstrap-token\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570157 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.570273 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc 
kubenswrapper[4877]: I1211 18:02:56.571085 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-config\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571460 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-encryption-config\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571558 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace78071-47e6-4240-9b3f-e677ac9c360d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571638 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571868 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571877 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.571887 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.572941 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-serving-cert\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.573169 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92aca579-224f-4b7e-9150-45ae61d15ca6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.573884 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.578670 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.578965 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.579726 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.580772 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert\") pod \"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.595240 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.614555 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 18:02:56 crc 
kubenswrapper[4877]: I1211 18:02:56.634497 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.647078 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.654485 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.671726 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.671921 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.17188458 +0000 UTC m=+138.198128664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.671999 4877 request.go:700] Waited for 1.016635357s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672026 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672076 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99c5646a-3905-4010-9538-53090f0160f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672101 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672135 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672179 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/217c0bda-35b4-4332-83de-9210f6906544-metrics-tls\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672213 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672273 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-profile-collector-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672364 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p9dc\" (UniqueName: 
\"kubernetes.io/projected/b7fda896-ae7e-403b-bab7-365140566954-kube-api-access-4p9dc\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672411 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-default-certificate\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672433 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672460 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f72d4ca-090e-420f-894c-d4571bdab1a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672485 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672510 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74678aa2-d0e1-4db8-854b-4e545859f4b1-cert\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672532 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672582 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bad86dd9-d430-4692-9caf-5a7218fd02b7-trusted-ca\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672613 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.672656 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 18:02:57.1726296 +0000 UTC m=+138.198873634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672717 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4tzv\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-kube-api-access-c4tzv\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672757 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-config\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672786 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-cabundle\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672814 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-plugins-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672835 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c909d56d-484a-4d56-a279-1d464fe45dc8-metrics-tls\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672854 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-csi-data-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672874 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8c61f40-7215-401b-9e07-74b95ed041cc-serving-cert\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672902 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftv8\" (UniqueName: \"kubernetes.io/projected/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-kube-api-access-tftv8\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672918 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63141648-7cad-41e1-96e7-38c0305347b0-service-ca-bundle\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672939 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99c5646a-3905-4010-9538-53090f0160f5-proxy-tls\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672953 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4p4\" (UniqueName: \"kubernetes.io/projected/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-kube-api-access-dk4p4\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672970 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tpc9\" (UniqueName: \"kubernetes.io/projected/99c5646a-3905-4010-9538-53090f0160f5-kube-api-access-4tpc9\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.672995 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkxz\" (UniqueName: \"kubernetes.io/projected/a9eec3bb-1807-465b-b272-cd2767e499d5-kube-api-access-4mkxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673016 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f72d4ca-090e-420f-894c-d4571bdab1a3-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673032 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673049 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbjz\" (UniqueName: \"kubernetes.io/projected/217c0bda-35b4-4332-83de-9210f6906544-kube-api-access-fsbjz\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673068 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwb8\" (UniqueName: \"kubernetes.io/projected/13bee5ff-144b-42e3-b5ae-01ed760c979b-kube-api-access-mzwb8\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673093 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b8c61f40-7215-401b-9e07-74b95ed041cc-config\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673111 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-mountpoint-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673133 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673155 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-srv-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673181 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40febc91-7d7a-4130-acbb-6c8c434033df-tmpfs\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673225 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8jxrz\" (UniqueName: \"kubernetes.io/projected/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-kube-api-access-8jxrz\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673256 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfkr\" (UniqueName: \"kubernetes.io/projected/63141648-7cad-41e1-96e7-38c0305347b0-kube-api-access-zsfkr\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673275 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-service-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673290 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-webhook-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673305 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-config\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 
18:02:56.673328 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9eec3bb-1807-465b-b272-cd2767e499d5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673347 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-metrics-certs\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673368 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfbnv\" (UniqueName: \"kubernetes.io/projected/40febc91-7d7a-4130-acbb-6c8c434033df-kube-api-access-jfbnv\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673411 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-srv-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673429 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673442 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtdf\" (UniqueName: \"kubernetes.io/projected/59572212-89da-4750-a504-ded844d647b9-kube-api-access-qmtdf\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673462 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d75bx\" (UniqueName: \"kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673487 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b7fda896-ae7e-403b-bab7-365140566954-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673531 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-socket-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673557 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-serving-cert\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673577 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfs5h\" (UniqueName: \"kubernetes.io/projected/b8c61f40-7215-401b-9e07-74b95ed041cc-kube-api-access-pfs5h\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673595 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f72d4ca-090e-420f-894c-d4571bdab1a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673621 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673649 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gffzz\" (UniqueName: \"kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 
18:02:56.673667 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlb8w\" (UniqueName: \"kubernetes.io/projected/6a67db71-72a9-4d82-94e7-673e78b11dc6-kube-api-access-mlb8w\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673684 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673726 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673745 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673761 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-certs\") pod 
\"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673791 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673809 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-client\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673830 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217c0bda-35b4-4332-83de-9210f6906544-config-volume\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673848 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673882 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9h9g\" (UniqueName: \"kubernetes.io/projected/d4d0a314-0249-46d9-9bdc-af9b7e063110-kube-api-access-q9h9g\") pod 
\"migrator-59844c95c7-f2jq4\" (UID: \"d4d0a314-0249-46d9-9bdc-af9b7e063110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673916 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-stats-auth\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673936 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85nh9\" (UniqueName: \"kubernetes.io/projected/74678aa2-d0e1-4db8-854b-4e545859f4b1-kube-api-access-85nh9\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673956 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x76g2\" (UniqueName: \"kubernetes.io/projected/e513abf7-304c-47b9-8940-8f17f618e491-kube-api-access-x76g2\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673972 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674023 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szdq\" (UniqueName: 
\"kubernetes.io/projected/8c92f553-9f22-485e-80a4-86b223a70ef7-kube-api-access-2szdq\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674044 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674064 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9eec3bb-1807-465b-b272-cd2767e499d5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674081 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-proxy-tls\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674114 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5s7l\" (UniqueName: \"kubernetes.io/projected/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-kube-api-access-c5s7l\") pod \"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674132 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-key\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n46b9\" (UniqueName: \"kubernetes.io/projected/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-kube-api-access-n46b9\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674172 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs82p\" (UniqueName: \"kubernetes.io/projected/c909d56d-484a-4d56-a279-1d464fe45dc8-kube-api-access-fs82p\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674190 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-images\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674213 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-node-bootstrap-token\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674239 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bad86dd9-d430-4692-9caf-5a7218fd02b7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674257 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-registration-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.674278 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.675124 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-socket-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.675609 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.675891 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-config\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.676040 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-plugins-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.678026 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.678838 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-serving-cert\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.679320 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-registration-dir\") pod \"csi-hostpathplugin-842ws\" (UID: 
\"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.679695 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-csi-data-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.679929 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.679429 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.680642 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63141648-7cad-41e1-96e7-38c0305347b0-service-ca-bundle\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.680993 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bad86dd9-d430-4692-9caf-5a7218fd02b7-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.681086 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9eec3bb-1807-465b-b272-cd2767e499d5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.681338 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-default-certificate\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.681807 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-service-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.681880 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-mountpoint-dir\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.681919 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-ca\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.682120 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-stats-auth\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.682802 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.682451 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40febc91-7d7a-4130-acbb-6c8c434033df-tmpfs\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.682860 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13bee5ff-144b-42e3-b5ae-01ed760c979b-config\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.673786 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/99c5646a-3905-4010-9538-53090f0160f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.684496 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.685405 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c909d56d-484a-4d56-a279-1d464fe45dc8-metrics-tls\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.685790 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9eec3bb-1807-465b-b272-cd2767e499d5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.687023 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bad86dd9-d430-4692-9caf-5a7218fd02b7-metrics-tls\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 
18:02:56.688145 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f72d4ca-090e-420f-894c-d4571bdab1a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.688238 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/99c5646a-3905-4010-9538-53090f0160f5-proxy-tls\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.689723 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/13bee5ff-144b-42e3-b5ae-01ed760c979b-etcd-client\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.694245 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.695258 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.697700 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.698112 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63141648-7cad-41e1-96e7-38c0305347b0-metrics-certs\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.706999 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f72d4ca-090e-420f-894c-d4571bdab1a3-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.715329 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.735096 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.754625 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.763770 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8c61f40-7215-401b-9e07-74b95ed041cc-serving-cert\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.774890 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.775349 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.775476 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.275431632 +0000 UTC m=+138.301675676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.778817 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.779488 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.27946105 +0000 UTC m=+138.305705134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.785291 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c61f40-7215-401b-9e07-74b95ed041cc-config\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.793850 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.815827 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.834601 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.851156 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.859438 4877 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.865443 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-webhook-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.866528 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40febc91-7d7a-4130-acbb-6c8c434033df-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.874974 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.880804 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.881702 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.381673746 +0000 UTC m=+138.407917930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.894598 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.914785 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.923037 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7ffdw"] Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.929104 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-srv-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.934155 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.941719 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/59572212-89da-4750-a504-ded844d647b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.948614 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-profile-collector-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.949733 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.953127 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" event={"ID":"efaee247-0579-47df-b29f-a6009d7302c3","Type":"ContainerStarted","Data":"32a48da3375a569368e9554fb3d488e0c699b4c4d13f1eaa8be92230167eba1f"} Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.954350 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.975530 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.984687 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: 
\"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:56 crc kubenswrapper[4877]: E1211 18:02:56.985885 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.485833524 +0000 UTC m=+138.512077568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:56 crc kubenswrapper[4877]: I1211 18:02:56.993335 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.013402 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.028551 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a67db71-72a9-4d82-94e7-673e78b11dc6-srv-cert\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.034062 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.037416 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b7fda896-ae7e-403b-bab7-365140566954-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.054470 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.074273 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.086403 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.086700 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.586666064 +0000 UTC m=+138.612910108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.087087 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.087732 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.587707402 +0000 UTC m=+138.613951446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.094288 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.101519 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.114142 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.133580 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.153978 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.164179 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-key\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.175998 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.189087 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.189336 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.689288401 +0000 UTC m=+138.715532445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.189699 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.190105 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.690089123 +0000 UTC m=+138.716333157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.193561 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.197796 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c92f553-9f22-485e-80a4-86b223a70ef7-signing-cabundle\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.214420 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.219808 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-images\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.234230 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.254935 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.265244 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-proxy-tls\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.274014 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.290819 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.291216 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.791171909 +0000 UTC m=+138.817416003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.291877 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.292332 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.792314329 +0000 UTC m=+138.818558373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.294126 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.296620 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.314293 4877 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.334842 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.354045 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.374117 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.383962 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-node-bootstrap-token\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.393154 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.394016 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.893997732 +0000 UTC m=+138.920241776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.394156 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.405851 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e513abf7-304c-47b9-8940-8f17f618e491-certs\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.415007 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.433442 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.439745 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/217c0bda-35b4-4332-83de-9210f6906544-metrics-tls\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.453902 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.474697 4877 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.482407 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217c0bda-35b4-4332-83de-9210f6906544-config-volume\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.495623 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.496735 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:57.99669071 +0000 UTC m=+139.022934964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.510719 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9whj5\" (UniqueName: \"kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5\") pod \"oauth-openshift-558db77b4-kqnqb\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.528645 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbd78\" (UniqueName: \"kubernetes.io/projected/df9b93ff-b380-4673-9fc2-6c50f0523377-kube-api-access-sbd78\") pod \"openshift-config-operator-7777fb866f-sjpc9\" (UID: \"df9b93ff-b380-4673-9fc2-6c50f0523377\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.550946 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.593967 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.597213 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.597426 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.097369085 +0000 UTC m=+139.123613129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.597798 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.598212 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.098203898 +0000 UTC m=+139.124447942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.602623 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.615319 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.622398 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74678aa2-d0e1-4db8-854b-4e545859f4b1-cert\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.634570 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.655673 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.673189 4877 request.go:700] Waited for 1.458304988s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&limit=500&resourceVersion=0 Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.675199 4877 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.694941 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.699121 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.699924 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.19990074 +0000 UTC m=+139.226144784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.719395 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.734928 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.756818 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.797297 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4vd\" (UniqueName: \"kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd\") pod \"route-controller-manager-6576b87f9c-mgg8t\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.801540 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.801872 4877 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.30185772 +0000 UTC m=+139.328101764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.804785 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.810628 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvx4\" (UniqueName: \"kubernetes.io/projected/098a78c0-969d-4ab8-97e7-6d8c2b6be90a-kube-api-access-qmvx4\") pod \"apiserver-7bbb656c7d-nvhsm\" (UID: \"098a78c0-969d-4ab8-97e7-6d8c2b6be90a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.819147 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9"] Dec 11 18:02:57 crc kubenswrapper[4877]: W1211 18:02:57.844357 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf9b93ff_b380_4673_9fc2_6c50f0523377.slice/crio-d83bed99e14203f327dd5121e9af9e76edf22ab2311e47d614b9677ebeefe218 WatchSource:0}: Error finding container d83bed99e14203f327dd5121e9af9e76edf22ab2311e47d614b9677ebeefe218: Status 404 
returned error can't find the container with id d83bed99e14203f327dd5121e9af9e76edf22ab2311e47d614b9677ebeefe218 Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.847634 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjdnh\" (UniqueName: \"kubernetes.io/projected/92aca579-224f-4b7e-9150-45ae61d15ca6-kube-api-access-tjdnh\") pod \"cluster-samples-operator-665b6dd947-ft6df\" (UID: \"92aca579-224f-4b7e-9150-45ae61d15ca6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.858495 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6j7\" (UniqueName: \"kubernetes.io/projected/d8bc79c0-914b-4875-a155-01d87a497f69-kube-api-access-tf6j7\") pod \"authentication-operator-69f744f599-jwq6x\" (UID: \"d8bc79c0-914b-4875-a155-01d87a497f69\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.869187 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc924\" (UniqueName: \"kubernetes.io/projected/a2a6190e-6fb8-4f61-99e1-63972f85df6d-kube-api-access-vc924\") pod \"machine-approver-56656f9798-vxx99\" (UID: \"a2a6190e-6fb8-4f61-99e1-63972f85df6d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.891170 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpk96\" (UniqueName: \"kubernetes.io/projected/ace78071-47e6-4240-9b3f-e677ac9c360d-kube-api-access-jpk96\") pod \"openshift-controller-manager-operator-756b6f6bc6-2gt7h\" (UID: \"ace78071-47e6-4240-9b3f-e677ac9c360d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.903173 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:57 crc kubenswrapper[4877]: E1211 18:02:57.904164 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.404141778 +0000 UTC m=+139.430385822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.912703 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcgf\" (UniqueName: \"kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf\") pod \"console-f9d7485db-7bj9b\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.922953 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.929985 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.941969 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.943566 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.949082 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgrbw\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.969998 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s552c\" (UniqueName: \"kubernetes.io/projected/1cae51ed-b80c-4017-9b9f-1485a809f145-kube-api-access-s552c\") pod \"downloads-7954f5f757-cvvxz\" (UID: \"1cae51ed-b80c-4017-9b9f-1485a809f145\") " pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:02:57 crc kubenswrapper[4877]: I1211 18:02:57.988592 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fsk\" (UniqueName: \"kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk\") pod 
\"controller-manager-879f6c89f-cxfwb\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.000136 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.005633 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.006174 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.506156739 +0000 UTC m=+139.532400793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.021400 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7djd\" (UniqueName: \"kubernetes.io/projected/6bc6c140-29ac-419b-b97c-7d43b92b2cc1-kube-api-access-r7djd\") pod \"console-operator-58897d9998-p7gb9\" (UID: \"6bc6c140-29ac-419b-b97c-7d43b92b2cc1\") " pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.021751 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqnqb"] Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.022214 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.042720 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.043436 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p9dc\" (UniqueName: \"kubernetes.io/projected/b7fda896-ae7e-403b-bab7-365140566954-kube-api-access-4p9dc\") pod \"multus-admission-controller-857f4d67dd-c9fjz\" (UID: \"b7fda896-ae7e-403b-bab7-365140566954\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.057960 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"] Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.066794 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24d8476c-b8ef-4336-b6b7-62a4faabb6a9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5n5hp\" (UID: \"24d8476c-b8ef-4336-b6b7-62a4faabb6a9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.074771 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtdf\" (UniqueName: \"kubernetes.io/projected/59572212-89da-4750-a504-ded844d647b9-kube-api-access-qmtdf\") pod \"olm-operator-6b444d44fb-hnn72\" (UID: \"59572212-89da-4750-a504-ded844d647b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.100495 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d75bx\" (UniqueName: \"kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx\") pod \"collect-profiles-29424600-hpkv2\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.107247 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.107622 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.607583735 +0000 UTC m=+139.633827779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.108000 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.109842 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.609320661 +0000 UTC m=+139.635564705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.110855 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4tzv\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-kube-api-access-c4tzv\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.116628 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.124633 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.127347 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f72d4ca-090e-420f-894c-d4571bdab1a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zp4\" (UID: \"2f72d4ca-090e-420f-894c-d4571bdab1a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.140392 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.155624 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm"] Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.174404 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h"] Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.192071 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76g2\" (UniqueName: \"kubernetes.io/projected/e513abf7-304c-47b9-8940-8f17f618e491-kube-api-access-x76g2\") pod \"machine-config-server-svrh5\" (UID: \"e513abf7-304c-47b9-8940-8f17f618e491\") " pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.208468 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bad86dd9-d430-4692-9caf-5a7218fd02b7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4pjpv\" (UID: \"bad86dd9-d430-4692-9caf-5a7218fd02b7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.209765 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.209971 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-11 18:02:58.709936365 +0000 UTC m=+139.736180399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.210304 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.210837 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.710830009 +0000 UTC m=+139.737074053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.245763 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.252093 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n46b9\" (UniqueName: \"kubernetes.io/projected/1dc1cc46-73f4-4e0b-a012-d9b9599ebc12-kube-api-access-n46b9\") pod \"package-server-manager-789f6589d5-4rglc\" (UID: \"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.253807 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.268435 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.271257 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs82p\" (UniqueName: \"kubernetes.io/projected/c909d56d-484a-4d56-a279-1d464fe45dc8-kube-api-access-fs82p\") pod \"dns-operator-744455d44c-kfghc\" (UID: \"c909d56d-484a-4d56-a279-1d464fe45dc8\") " pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.278039 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.294770 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.308055 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftv8\" (UniqueName: \"kubernetes.io/projected/34b5f044-91ef-40f8-93f3-bfe4bb970cc6-kube-api-access-tftv8\") pod \"openshift-apiserver-operator-796bbdcf4f-gxtrh\" (UID: \"34b5f044-91ef-40f8-93f3-bfe4bb970cc6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.309705 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkxz\" (UniqueName: \"kubernetes.io/projected/a9eec3bb-1807-465b-b272-cd2767e499d5-kube-api-access-4mkxz\") pod \"kube-storage-version-migrator-operator-b67b599dd-4s29q\" (UID: \"a9eec3bb-1807-465b-b272-cd2767e499d5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.311841 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.312080 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.812041149 +0000 UTC m=+139.838285233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.312495 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.312952 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.812932862 +0000 UTC m=+139.839176906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.335800 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.341058 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4p4\" (UniqueName: \"kubernetes.io/projected/a531323f-a1c6-4ea2-bc5d-f4872beb8b60-kube-api-access-dk4p4\") pod \"csi-hostpathplugin-842ws\" (UID: \"a531323f-a1c6-4ea2-bc5d-f4872beb8b60\") " pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.355369 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.364881 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tpc9\" (UniqueName: \"kubernetes.io/projected/99c5646a-3905-4010-9538-53090f0160f5-kube-api-access-4tpc9\") pod \"machine-config-controller-84d6567774-p68fz\" (UID: \"99c5646a-3905-4010-9538-53090f0160f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.371734 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gffzz\" (UniqueName: \"kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz\") pod \"marketplace-operator-79b997595-k2xpd\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.378212 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-842ws" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.389054 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svrh5" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.395854 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8cb7b6c-98f4-4a07-97e6-53e371e4cac3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4jvkl\" (UID: \"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.410671 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbjz\" (UniqueName: \"kubernetes.io/projected/217c0bda-35b4-4332-83de-9210f6906544-kube-api-access-fsbjz\") pod \"dns-default-vrbhw\" (UID: \"217c0bda-35b4-4332-83de-9210f6906544\") " pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.413825 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.414024 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.913990138 +0000 UTC m=+139.940234192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.414463 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.414934 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:58.914919423 +0000 UTC m=+139.941163657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.440900 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwb8\" (UniqueName: \"kubernetes.io/projected/13bee5ff-144b-42e3-b5ae-01ed760c979b-kube-api-access-mzwb8\") pod \"etcd-operator-b45778765-ftfkv\" (UID: \"13bee5ff-144b-42e3-b5ae-01ed760c979b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.451998 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfs5h\" (UniqueName: \"kubernetes.io/projected/b8c61f40-7215-401b-9e07-74b95ed041cc-kube-api-access-pfs5h\") pod \"service-ca-operator-777779d784-68sbb\" (UID: \"b8c61f40-7215-401b-9e07-74b95ed041cc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.455027 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvzf\" (UniqueName: \"kubernetes.io/projected/1e05b4cd-2477-4d69-804f-c8dc59d6da3d-kube-api-access-nbvzf\") pod \"machine-api-operator-5694c8668f-4c92n\" (UID: \"1e05b4cd-2477-4d69-804f-c8dc59d6da3d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.455122 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgx9\" (UniqueName: \"kubernetes.io/projected/e37c4bcf-f60b-44fa-b0c3-85eecf0e785b-kube-api-access-kvgx9\") pod 
\"cluster-image-registry-operator-dc59b4c8b-2c697\" (UID: \"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.456212 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85nh9\" (UniqueName: \"kubernetes.io/projected/74678aa2-d0e1-4db8-854b-4e545859f4b1-kube-api-access-85nh9\") pod \"ingress-canary-tnprh\" (UID: \"74678aa2-d0e1-4db8-854b-4e545859f4b1\") " pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.459147 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5s7l\" (UniqueName: \"kubernetes.io/projected/98cc800c-b8b6-49f9-94c0-42bb0c22eb76-kube-api-access-c5s7l\") pod \"control-plane-machine-set-operator-78cbb6b69f-kntl9\" (UID: \"98cc800c-b8b6-49f9-94c0-42bb0c22eb76\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.460345 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szdq\" (UniqueName: \"kubernetes.io/projected/8c92f553-9f22-485e-80a4-86b223a70ef7-kube-api-access-2szdq\") pod \"service-ca-9c57cc56f-sjt2h\" (UID: \"8c92f553-9f22-485e-80a4-86b223a70ef7\") " pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.485797 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.487170 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jxrz\" (UniqueName: \"kubernetes.io/projected/43bf03d3-95e6-47a5-8384-6f7cd56f2cc0-kube-api-access-8jxrz\") pod \"machine-config-operator-74547568cd-tbhjv\" (UID: \"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.491453 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.498912 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.498941 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfkr\" (UniqueName: \"kubernetes.io/projected/63141648-7cad-41e1-96e7-38c0305347b0-kube-api-access-zsfkr\") pod \"router-default-5444994796-mdpb8\" (UID: \"63141648-7cad-41e1-96e7-38c0305347b0\") " pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.506817 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.512124 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9h9g\" (UniqueName: \"kubernetes.io/projected/d4d0a314-0249-46d9-9bdc-af9b7e063110-kube-api-access-q9h9g\") pod \"migrator-59844c95c7-f2jq4\" (UID: \"d4d0a314-0249-46d9-9bdc-af9b7e063110\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.514589 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.518808 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.519399 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.019351678 +0000 UTC m=+140.045595732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.521989 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.528814 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.533673 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbnv\" (UniqueName: \"kubernetes.io/projected/40febc91-7d7a-4130-acbb-6c8c434033df-kube-api-access-jfbnv\") pod \"packageserver-d55dfcdfc-gx8gb\" (UID: \"40febc91-7d7a-4130-acbb-6c8c434033df\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.537942 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.556938 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlb8w\" (UniqueName: \"kubernetes.io/projected/6a67db71-72a9-4d82-94e7-673e78b11dc6-kube-api-access-mlb8w\") pod \"catalog-operator-68c6474976-lpcf4\" (UID: \"6a67db71-72a9-4d82-94e7-673e78b11dc6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.561939 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.569195 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.588015 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.605258 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" Dec 11 18:02:58 crc kubenswrapper[4877]: W1211 18:02:58.606804 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode513abf7_304c_47b9_8940_8f17f618e491.slice/crio-e3291770de76c1cbdd5a5dd34872c3c02133873fb0b73ca788b439f1131c01c9 WatchSource:0}: Error finding container e3291770de76c1cbdd5a5dd34872c3c02133873fb0b73ca788b439f1131c01c9: Status 404 returned error can't find the container with id e3291770de76c1cbdd5a5dd34872c3c02133873fb0b73ca788b439f1131c01c9 Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.612873 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.620727 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.621160 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.121146874 +0000 UTC m=+140.147390918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.633584 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.637248 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.648906 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.698237 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.701255 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vrbhw" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.710543 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tnprh" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.721748 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df"] Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.722221 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.722444 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.222417935 +0000 UTC m=+140.248661979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.722648 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.723103 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.223096013 +0000 UTC m=+140.249340057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.757957 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.826297 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.826545 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.326509611 +0000 UTC m=+140.352753665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.827443 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.827818 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.327808436 +0000 UTC m=+140.354052480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.928491 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.928699 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.428666966 +0000 UTC m=+140.454911010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.928932 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:58 crc kubenswrapper[4877]: E1211 18:02:58.929600 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.429577601 +0000 UTC m=+140.455821635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:58 crc kubenswrapper[4877]: I1211 18:02:58.941157 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9fjz"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.030321 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.030568 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.530525903 +0000 UTC m=+140.556769947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.031003 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.032074 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.532049014 +0000 UTC m=+140.558293058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.132799 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.133342 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.633279174 +0000 UTC m=+140.659523378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.187111 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.236033 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.236538 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.736522468 +0000 UTC m=+140.762766512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.277053 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.285534 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.346682 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.347245 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.847222931 +0000 UTC m=+140.873466975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.453154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.453855 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:02:59.953818194 +0000 UTC m=+140.980062238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.457754 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cvvxz"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.498284 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4"] Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.519857 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" event={"ID":"24d8476c-b8ef-4336-b6b7-62a4faabb6a9","Type":"ContainerStarted","Data":"0131a097ad1563d60a642fe21fb9b4c9f6842d10ca9278c5eb682e36284b338f"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.523469 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mdpb8" event={"ID":"63141648-7cad-41e1-96e7-38c0305347b0","Type":"ContainerStarted","Data":"d2142c427b9dccd1e508a9fd4f18373182f067a1099660eb636ffe5483fb1ebb"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.535456 4877 generic.go:334] "Generic (PLEG): container finished" podID="df9b93ff-b380-4673-9fc2-6c50f0523377" containerID="fabd81a6dc4e6aff50fbeada8fde2373524c301a54027c8baaaff8a4a80ccd8f" exitCode=0 Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.535859 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" 
event={"ID":"df9b93ff-b380-4673-9fc2-6c50f0523377","Type":"ContainerDied","Data":"fabd81a6dc4e6aff50fbeada8fde2373524c301a54027c8baaaff8a4a80ccd8f"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.535941 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" event={"ID":"df9b93ff-b380-4673-9fc2-6c50f0523377","Type":"ContainerStarted","Data":"d83bed99e14203f327dd5121e9af9e76edf22ab2311e47d614b9677ebeefe218"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.548141 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" event={"ID":"b7fda896-ae7e-403b-bab7-365140566954","Type":"ContainerStarted","Data":"49515b258f126d188165b17aefacfc0953a2818a2bfc69d8476e43a506b990b8"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.551149 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" event={"ID":"a2a6190e-6fb8-4f61-99e1-63972f85df6d","Type":"ContainerStarted","Data":"a09af7a7b1d0c2d28b8f2fda77284cc56ebdfadc9ce578bc713a739679cca742"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.560978 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.561549 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.061525667 +0000 UTC m=+141.087769711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.622753 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" event={"ID":"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8","Type":"ContainerStarted","Data":"35576680f1780ebccc8e7e25b046f2a992774a6fcd101e70caf0514b403b45ca"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.663313 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.664995 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.164970666 +0000 UTC m=+141.191214790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.677115 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" event={"ID":"92aca579-224f-4b7e-9150-45ae61d15ca6","Type":"ContainerStarted","Data":"9b9ea168dcc699302e9f257cf25fafd02f061f6a46cbcda46847600c4f6becfc"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.697795 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" event={"ID":"ace78071-47e6-4240-9b3f-e677ac9c360d","Type":"ContainerStarted","Data":"3531d69a1573d4bdc5885276cc071ac1b08ee8ca9de38ad323cadbd6be9e2b8b"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.700040 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" event={"ID":"098a78c0-969d-4ab8-97e7-6d8c2b6be90a","Type":"ContainerStarted","Data":"852b5178dcf818105d1fc89cfaa4595c81f83a80ae6e9034b0ccc53d6ff68be2"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.717071 4877 generic.go:334] "Generic (PLEG): container finished" podID="efaee247-0579-47df-b29f-a6009d7302c3" containerID="c1d23c114c6c8fcfcb997d0ab37fc64ebd933b3b89a95da6f273c9efe5ea4ae1" exitCode=0 Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.717161 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" 
event={"ID":"efaee247-0579-47df-b29f-a6009d7302c3","Type":"ContainerDied","Data":"c1d23c114c6c8fcfcb997d0ab37fc64ebd933b3b89a95da6f273c9efe5ea4ae1"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.719178 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" event={"ID":"2b21482e-04d8-493d-a274-670ce4961923","Type":"ContainerStarted","Data":"3643c4f7cd82bbd90f907290323f4cf0a8e774f131ba026dc33c5a89db0828b1"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.722411 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svrh5" event={"ID":"e513abf7-304c-47b9-8940-8f17f618e491","Type":"ContainerStarted","Data":"e3291770de76c1cbdd5a5dd34872c3c02133873fb0b73ca788b439f1131c01c9"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.724645 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" event={"ID":"884770ef-a741-47ce-bdde-79844ff9f886","Type":"ContainerStarted","Data":"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.724712 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" event={"ID":"884770ef-a741-47ce-bdde-79844ff9f886","Type":"ContainerStarted","Data":"b8d8648d41ab66ba29d7f7a8de66885b1b1bd53b099f6e9c80d5a8c77fb7ed6a"} Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.725359 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.764364 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.764557 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.264498241 +0000 UTC m=+141.290742285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.764694 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.765112 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.265097767 +0000 UTC m=+141.291341811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.867440 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.867951 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.3679231 +0000 UTC m=+141.394167144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.868258 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.872562 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.372552023 +0000 UTC m=+141.398796067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:02:59 crc kubenswrapper[4877]: I1211 18:02:59.971235 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:02:59 crc kubenswrapper[4877]: E1211 18:02:59.975127 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.475062748 +0000 UTC m=+141.501306812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.073326 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.073803 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.573787221 +0000 UTC m=+141.600031255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.175323 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.176015 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.675990317 +0000 UTC m=+141.702234361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.230771 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" podStartSLOduration=120.230744348 podStartE2EDuration="2m0.230744348s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.227500681 +0000 UTC m=+141.253744725" watchObservedRunningTime="2025-12-11 18:03:00.230744348 +0000 UTC m=+141.256988392" Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.266605 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-svrh5" podStartSLOduration=5.266561993 podStartE2EDuration="5.266561993s" podCreationTimestamp="2025-12-11 18:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.264113938 +0000 UTC m=+141.290357982" watchObservedRunningTime="2025-12-11 18:03:00.266561993 +0000 UTC m=+141.292806037" Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.282358 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: 
\"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.282897 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.782878489 +0000 UTC m=+141.809122533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.318787 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" podStartSLOduration=120.318764716 podStartE2EDuration="2m0.318764716s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.317295877 +0000 UTC m=+141.343539931" watchObservedRunningTime="2025-12-11 18:03:00.318764716 +0000 UTC m=+141.345008760"
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.386297 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.386796 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.88677387 +0000 UTC m=+141.913017914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.487911 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.488458 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:00.988440012 +0000 UTC m=+142.014684056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.589155 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.589838 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.089814836 +0000 UTC m=+142.116058880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.606615 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.694353 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.694864 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.194847637 +0000 UTC m=+142.221091681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.744562 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cvvxz" event={"ID":"1cae51ed-b80c-4017-9b9f-1485a809f145","Type":"ContainerStarted","Data":"7d449925741ca66efd0bbe93821ed0b376b4647164182d2c1a0573fcee720316"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.748759 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2gt7h" event={"ID":"ace78071-47e6-4240-9b3f-e677ac9c360d","Type":"ContainerStarted","Data":"75adff35f16dacd666521a56fe31076c0cbdc910288c2ac3d2a2e6ee3f46d9af"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.750520 4877 generic.go:334] "Generic (PLEG): container finished" podID="098a78c0-969d-4ab8-97e7-6d8c2b6be90a" containerID="2b33b7f40c374ecd7aaea0f24bc9dfa1a2168d5f5bdc2f5e372c22f257c92124" exitCode=0
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.750797 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" event={"ID":"098a78c0-969d-4ab8-97e7-6d8c2b6be90a","Type":"ContainerDied","Data":"2b33b7f40c374ecd7aaea0f24bc9dfa1a2168d5f5bdc2f5e372c22f257c92124"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.752115 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl"]
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.753033 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mdpb8" event={"ID":"63141648-7cad-41e1-96e7-38c0305347b0","Type":"ContainerStarted","Data":"05aa829ba13d715cc3756c783d32d80b7b640b6eb4f6e87623febf398ab78db7"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.768942 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" event={"ID":"2f72d4ca-090e-420f-894c-d4571bdab1a3","Type":"ContainerStarted","Data":"ff07c59605396fbb3e90af448dccf9f5e5776f88a34a23cea059dc6456ddcd8c"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.782563 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" event={"ID":"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8","Type":"ContainerStarted","Data":"81974eb527b02de4a7795bc767f5f82e3e01b3e795eb78740f8f2e56bb15437b"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.782931 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.797005 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"]
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.797860 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.798350 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.298328617 +0000 UTC m=+142.324572661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.817093 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" event={"ID":"92aca579-224f-4b7e-9150-45ae61d15ca6","Type":"ContainerStarted","Data":"1edfd00f8c862085b43a354263b65eed94ed99f3c852bb4d4694224e50153eeb"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.835482 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" event={"ID":"2b21482e-04d8-493d-a274-670ce4961923","Type":"ContainerStarted","Data":"150baca428c2169e9173e557e7a5bd0ec016443dd6b0aa88006b1055e7af5ea3"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.852591 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" event={"ID":"59572212-89da-4750-a504-ded844d647b9","Type":"ContainerStarted","Data":"65d4acb33fc27e1fbfb495bd53b5b7969b9a2e12a2c5a67320a51f0d009937b5"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.872254 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svrh5" event={"ID":"e513abf7-304c-47b9-8940-8f17f618e491","Type":"ContainerStarted","Data":"83876f3c268df6a9236c0a458584088172327423fd272dff808e4da3c3253581"}
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.877693 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mdpb8" podStartSLOduration=120.877675214 podStartE2EDuration="2m0.877675214s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.847729155 +0000 UTC m=+141.873973219" watchObservedRunningTime="2025-12-11 18:03:00.877675214 +0000 UTC m=+141.903919258"
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.879698 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-842ws"]
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.923996 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:00 crc kubenswrapper[4877]: E1211 18:03:00.929845 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.429820365 +0000 UTC m=+142.456064409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.943923 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv"]
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.974506 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" podStartSLOduration=120.974476646 podStartE2EDuration="2m0.974476646s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.897623096 +0000 UTC m=+141.923867140" watchObservedRunningTime="2025-12-11 18:03:00.974476646 +0000 UTC m=+142.000720690"
Dec 11 18:03:00 crc kubenswrapper[4877]: W1211 18:03:00.984926 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8cb7b6c_98f4_4a07_97e6_53e371e4cac3.slice/crio-22a1c2584d5f92f3c7a595031c633c22669d0150f794f12d7a6b4e567f61442d WatchSource:0}: Error finding container 22a1c2584d5f92f3c7a595031c633c22669d0150f794f12d7a6b4e567f61442d: Status 404 returned error can't find the container with id 22a1c2584d5f92f3c7a595031c633c22669d0150f794f12d7a6b4e567f61442d
Dec 11 18:03:00 crc kubenswrapper[4877]: I1211 18:03:00.986609 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" podStartSLOduration=120.986591199 podStartE2EDuration="2m0.986591199s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:00.947864566 +0000 UTC m=+141.974108610" watchObservedRunningTime="2025-12-11 18:03:00.986591199 +0000 UTC m=+142.012835243"
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.035914 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.037411 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.537368663 +0000 UTC m=+142.563612707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.038686 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.046086 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.048450 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p7gb9"]
Dec 11 18:03:01 crc kubenswrapper[4877]: W1211 18:03:01.074089 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bc6c140_29ac_419b_b97c_7d43b92b2cc1.slice/crio-179b4b2457038b1e1d6519165640d74aa3e29062fd1d29695f0e82162eafaacf WatchSource:0}: Error finding container 179b4b2457038b1e1d6519165640d74aa3e29062fd1d29695f0e82162eafaacf: Status 404 returned error can't find the container with id 179b4b2457038b1e1d6519165640d74aa3e29062fd1d29695f0e82162eafaacf
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.139196 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.139837 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.639818675 +0000 UTC m=+142.666062719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.147248 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb"
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.241282 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.241728 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.741701643 +0000 UTC m=+142.767945687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.251504 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tnprh"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.251618 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.253428 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ftfkv"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.320129 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.345280 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.345755 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.845736908 +0000 UTC m=+142.871980952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: W1211 18:03:01.375816 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9eec3bb_1807_465b_b272_cd2767e499d5.slice/crio-16452e1471dbb5869a06c1a5368fe865c16c3b3da2d50d73ac95c292143abe77 WatchSource:0}: Error finding container 16452e1471dbb5869a06c1a5368fe865c16c3b3da2d50d73ac95c292143abe77: Status 404 returned error can't find the container with id 16452e1471dbb5869a06c1a5368fe865c16c3b3da2d50d73ac95c292143abe77
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.437113 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-68sbb"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.447397 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.447870 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:01.947848662 +0000 UTC m=+142.974092706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.468589 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.505704 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.517335 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c92n"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.529206 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mdpb8"
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.546345 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.548217 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 18:03:01 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld
Dec 11 18:03:01 crc kubenswrapper[4877]: [+]process-running ok
Dec 11 18:03:01 crc kubenswrapper[4877]: healthz check failed
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.560332 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.549433 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.549725 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.049713549 +0000 UTC m=+143.075957593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.567365 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jwq6x"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.567754 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.586982 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.597008 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.629196 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sjt2h"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.631127 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.647148 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vrbhw"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.648933 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697"]
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.663633 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.666120 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.166094593 +0000 UTC m=+143.192338637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.681098 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kfghc"]
Dec 11 18:03:01 crc kubenswrapper[4877]: W1211 18:03:01.715220 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a67db71_72a9_4d82_94e7_673e78b11dc6.slice/crio-ea2f9f331bdd9d9630f03aa00e5ca01a4a0c9915008a05f5b17cfa192e71f8f6 WatchSource:0}: Error finding container ea2f9f331bdd9d9630f03aa00e5ca01a4a0c9915008a05f5b17cfa192e71f8f6: Status 404 returned error can't find the container with id ea2f9f331bdd9d9630f03aa00e5ca01a4a0c9915008a05f5b17cfa192e71f8f6
Dec 11 18:03:01 crc kubenswrapper[4877]: W1211 18:03:01.758867 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99c5646a_3905_4010_9538_53090f0160f5.slice/crio-43079717f23cc8dd267c9d48a695f6c2e9945c9a4d8550fcc458e5719fc73bdc WatchSource:0}: Error finding container 43079717f23cc8dd267c9d48a695f6c2e9945c9a4d8550fcc458e5719fc73bdc: Status 404 returned error can't find the container with id 43079717f23cc8dd267c9d48a695f6c2e9945c9a4d8550fcc458e5719fc73bdc
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.765054 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.765625 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.265608948 +0000 UTC m=+143.291852992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: W1211 18:03:01.868080 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc909d56d_484a_4d56_a279_1d464fe45dc8.slice/crio-708a657dcfe973901056c42264b74fa847f4df2819e13bd06a65d677608e82ba WatchSource:0}: Error finding container 708a657dcfe973901056c42264b74fa847f4df2819e13bd06a65d677608e82ba: Status 404 returned error can't find the container with id 708a657dcfe973901056c42264b74fa847f4df2819e13bd06a65d677608e82ba
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.869715 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.870677 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.37065227 +0000 UTC m=+143.396896314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.936757 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" event={"ID":"99c5646a-3905-4010-9538-53090f0160f5","Type":"ContainerStarted","Data":"43079717f23cc8dd267c9d48a695f6c2e9945c9a4d8550fcc458e5719fc73bdc"}
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.945585 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" event={"ID":"2f72d4ca-090e-420f-894c-d4571bdab1a3","Type":"ContainerStarted","Data":"91ab531454fb1368f753f3d1be3614fbdcc5276d92d71a5fba2daac67c3554ab"}
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.948999 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" event={"ID":"24d8476c-b8ef-4336-b6b7-62a4faabb6a9","Type":"ContainerStarted","Data":"144080e33e3b7ad8d90de5a626f81344755b3e8f9b3d47f2903a9f8d0344572b"}
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.969530 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zp4" podStartSLOduration=121.969501936 podStartE2EDuration="2m1.969501936s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:01.966523247 +0000 UTC m=+142.992767311" watchObservedRunningTime="2025-12-11 18:03:01.969501936 +0000 UTC m=+142.995745980"
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.974518 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4"
Dec 11 18:03:01 crc kubenswrapper[4877]: E1211 18:03:01.974896 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.47488085 +0000 UTC m=+143.501124894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 18:03:01 crc kubenswrapper[4877]: I1211 18:03:01.987776 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" event={"ID":"6a67db71-72a9-4d82-94e7-673e78b11dc6","Type":"ContainerStarted","Data":"ea2f9f331bdd9d9630f03aa00e5ca01a4a0c9915008a05f5b17cfa192e71f8f6"}
Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.033826 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" event={"ID":"92aca579-224f-4b7e-9150-45ae61d15ca6","Type":"ContainerStarted","Data":"6021bd696e572abfa3513e49ea1b3009d819b8b107f08a1625e1be4bf108881b"}
Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.061925 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" event={"ID":"098a78c0-969d-4ab8-97e7-6d8c2b6be90a","Type":"ContainerStarted","Data":"f781c961d45fbc54ba960cee33710468e092a400e6a91f4b7c86aaff90d6bdaa"}
Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.064634 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ft6df" podStartSLOduration=122.064618093 podStartE2EDuration="2m2.064618093s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.064222423 +0000 UTC m=+143.090466467" watchObservedRunningTime="2025-12-11 18:03:02.064618093 +0000 UTC m=+143.090862137"
Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.065802 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5n5hp" podStartSLOduration=122.065794425 podStartE2EDuration="2m2.065794425s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.016164211 +0000 UTC m=+143.042408255" watchObservedRunningTime="2025-12-11 18:03:02.065794425 +0000 UTC m=+143.092038479"
Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.077205 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.077638 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.57762044 +0000 UTC m=+143.603864484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.127148 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" podStartSLOduration=122.127127741 podStartE2EDuration="2m2.127127741s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.125407765 +0000 UTC m=+143.151651799" watchObservedRunningTime="2025-12-11 18:03:02.127127741 +0000 UTC m=+143.153371775" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.130971 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" event={"ID":"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3","Type":"ContainerStarted","Data":"22a1c2584d5f92f3c7a595031c633c22669d0150f794f12d7a6b4e567f61442d"} Dec 11 
18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.138235 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" event={"ID":"d8bc79c0-914b-4875-a155-01d87a497f69","Type":"ContainerStarted","Data":"f1919553d1c361b335c1391a0a10d62e53c7986a4b8f9498b093a2b956cf043b"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.150161 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" event={"ID":"efaee247-0579-47df-b29f-a6009d7302c3","Type":"ContainerStarted","Data":"4b046e5508474bf04b774f39deb12e8c1ee44ce298306a3f0ab274af07643619"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.158081 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" event={"ID":"d4d0a314-0249-46d9-9bdc-af9b7e063110","Type":"ContainerStarted","Data":"d420cf0404f490f02051b307980e0ede373665e6e1245aa9cabc54d81c193172"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.163818 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bj9b" event={"ID":"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7","Type":"ContainerStarted","Data":"846e66d9d8afdfc1057bce57efd506bf9128b5aada3f713a0369588c7320467a"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.174904 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cvvxz" event={"ID":"1cae51ed-b80c-4017-9b9f-1485a809f145","Type":"ContainerStarted","Data":"2389e0b76e148d53e9e36619bcab48cdb967f6c036f3f5efa19028dc31811df7"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.176396 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.180302 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.181740 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.681718567 +0000 UTC m=+143.707962611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.190000 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" event={"ID":"fa0265e5-9837-4f97-891a-703b0e440df3","Type":"ContainerStarted","Data":"10fd7678332ea6a77b69c2518e1376525f5dbbd2a8eaa07257dee2d7c90ac23c"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.207493 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7bj9b" podStartSLOduration=122.207467724 podStartE2EDuration="2m2.207467724s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.202502741 +0000 UTC m=+143.228746785" watchObservedRunningTime="2025-12-11 
18:03:02.207467724 +0000 UTC m=+143.233711768" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.216121 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.216194 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.220586 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" event={"ID":"40febc91-7d7a-4130-acbb-6c8c434033df","Type":"ContainerStarted","Data":"8a8a3ce2279969f55d6cabd8ecba5c8f396b221d058a2eaa642e44399dd748a9"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.244583 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" event={"ID":"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b","Type":"ContainerStarted","Data":"fe418a3ea57e99588957de0deb5dee66ab2264b3edfec93a40962fa3929cbe91"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.271975 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cvvxz" podStartSLOduration=122.271946664 podStartE2EDuration="2m2.271946664s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.269743575 +0000 UTC m=+143.295987619" watchObservedRunningTime="2025-12-11 18:03:02.271946664 +0000 UTC 
m=+143.298190708" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.279963 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-842ws" event={"ID":"a531323f-a1c6-4ea2-bc5d-f4872beb8b60","Type":"ContainerStarted","Data":"8365cffb945a83c864cafb2fa44297b70d3782fd93c75536e6f595dbc5c4f264"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.282949 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" event={"ID":"b7fda896-ae7e-403b-bab7-365140566954","Type":"ContainerStarted","Data":"42ebd8ab1a797746e8cf875cbd19f20520e1345cf36e78c60f00f3e8c8d6b333"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.283006 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" event={"ID":"b7fda896-ae7e-403b-bab7-365140566954","Type":"ContainerStarted","Data":"4242ea8277aca445d3747fa67710f3d72640003586873ed665d923b9bdab3c2f"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.285760 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" event={"ID":"34b5f044-91ef-40f8-93f3-bfe4bb970cc6","Type":"ContainerStarted","Data":"7ab55f0f137fdd87069510b0a5e4b2ae9b6b8a3f51e7d1af77856c390bab5f7b"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.288904 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.289180 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.789115691 +0000 UTC m=+143.815359725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.289579 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.292136 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.792126962 +0000 UTC m=+143.818371006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.306037 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9fjz" podStartSLOduration=122.306012832 podStartE2EDuration="2m2.306012832s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.305224841 +0000 UTC m=+143.331468885" watchObservedRunningTime="2025-12-11 18:03:02.306012832 +0000 UTC m=+143.332256866" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.382015 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" event={"ID":"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0","Type":"ContainerStarted","Data":"95baceb1d8071d13890679c44967fbd2ecb0672ce85c1a2a9a9187a7929ba34b"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.394264 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.400167 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.900136823 +0000 UTC m=+143.926380857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.405044 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" event={"ID":"a2a6190e-6fb8-4f61-99e1-63972f85df6d","Type":"ContainerStarted","Data":"dc5310417a22c9649ca15e493b5e35f9aa6f3d67259624b9daa8683648df18f1"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.406645 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.407622 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:02.907602802 +0000 UTC m=+143.933846916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.446768 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" event={"ID":"a9eec3bb-1807-465b-b272-cd2767e499d5","Type":"ContainerStarted","Data":"16452e1471dbb5869a06c1a5368fe865c16c3b3da2d50d73ac95c292143abe77"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.473994 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" podStartSLOduration=122.473965522 podStartE2EDuration="2m2.473965522s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.473451298 +0000 UTC m=+143.499695352" watchObservedRunningTime="2025-12-11 18:03:02.473965522 +0000 UTC m=+143.500209576" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.479968 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" event={"ID":"8c92f553-9f22-485e-80a4-86b223a70ef7","Type":"ContainerStarted","Data":"7d5bfa581727bc1f0f279280748e2b178429e22f0273bf087f31312452d9ce12"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.494572 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" 
event={"ID":"b8c61f40-7215-401b-9e07-74b95ed041cc","Type":"ContainerStarted","Data":"886a1b4eef0ee42e24e8508fb129ff631124a20277184c3faceb58962ff02d2d"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.499201 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vrbhw" event={"ID":"217c0bda-35b4-4332-83de-9210f6906544","Type":"ContainerStarted","Data":"a1d4a26ba97d27eaf79ff641e153f7923ac8a475b306826252240c08f3188ede"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.504271 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tnprh" event={"ID":"74678aa2-d0e1-4db8-854b-4e545859f4b1","Type":"ContainerStarted","Data":"37302055098dc58ade26087dc7fd385817c8514eab397f7345697593034da69f"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.516171 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.517751 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.017716059 +0000 UTC m=+144.043960263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.519697 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" event={"ID":"1e05b4cd-2477-4d69-804f-c8dc59d6da3d","Type":"ContainerStarted","Data":"4df6c400a5f3744f0edd0239f3aad8222c3470c5ea3e958c7cadc00e250ae5c5"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.530450 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" event={"ID":"13bee5ff-144b-42e3-b5ae-01ed760c979b","Type":"ContainerStarted","Data":"44a099491e9d9d9203cc208b0b52d856584a380d489694b59e91d3223450abf4"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.574060 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:02 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:02 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:02 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.574496 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 
18:03:02.574311 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" event={"ID":"98cc800c-b8b6-49f9-94c0-42bb0c22eb76","Type":"ContainerStarted","Data":"44ea64e8502944504f5a164f8e76c3f7a204b89caee7cf581c65e21a782eccad"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.585983 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" event={"ID":"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12","Type":"ContainerStarted","Data":"e0d985de37e777a54e16b2deaef468287a0541b4f7c596d00c31120785960735"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.586033 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" event={"ID":"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12","Type":"ContainerStarted","Data":"ed5b373e5c079f14090751d9adeaa7bd83d23fac75775a97eefaa72515d992ac"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.625414 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.650923 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.150900252 +0000 UTC m=+144.177144296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.651831 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" event={"ID":"df9b93ff-b380-4673-9fc2-6c50f0523377","Type":"ContainerStarted","Data":"a5edbc87584adf05a66d9eff4bcf379c7770ab793aabead69de616adbd702b55"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.656245 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.705776 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" event={"ID":"59572212-89da-4750-a504-ded844d647b9","Type":"ContainerStarted","Data":"a75f0957096350feedebd0ad8e3cae10db8475210d0a006e707aba30678d5c15"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.706868 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.729822 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 
18:03:02.730036 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.229978601 +0000 UTC m=+144.256222645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.738249 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.738975 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.23895239 +0000 UTC m=+144.265196434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.755779 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.767757 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" event={"ID":"bad86dd9-d430-4692-9caf-5a7218fd02b7","Type":"ContainerStarted","Data":"fadacc7bf78d5eba078c028efb62c0b1eefabdd2be520be17fadf9bc4bf840d5"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.767851 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" event={"ID":"bad86dd9-d430-4692-9caf-5a7218fd02b7","Type":"ContainerStarted","Data":"0b407134e3b2aadcf2b13b5eb2ab6ed399ddcb64df5a290da4a5364cce317eb0"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.807143 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" event={"ID":"6bc6c140-29ac-419b-b97c-7d43b92b2cc1","Type":"ContainerStarted","Data":"179b4b2457038b1e1d6519165640d74aa3e29062fd1d29695f0e82162eafaacf"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.808078 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.817048 4877 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-p7gb9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.817111 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" podUID="6bc6c140-29ac-419b-b97c-7d43b92b2cc1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.833900 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" event={"ID":"2af0d3b7-c390-4fce-93a7-dea3cb5325d7","Type":"ContainerStarted","Data":"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.833953 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.833966 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" event={"ID":"2af0d3b7-c390-4fce-93a7-dea3cb5325d7","Type":"ContainerStarted","Data":"cc41b3797eed8daabf3f865c2c3742efed2ef33893846bf5d33399175d6e7d08"} Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.840078 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.840504 4877 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.340471358 +0000 UTC m=+144.366715402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.873245 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" podStartSLOduration=122.873223702 podStartE2EDuration="2m2.873223702s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.796460335 +0000 UTC m=+143.822704379" watchObservedRunningTime="2025-12-11 18:03:02.873223702 +0000 UTC m=+143.899467746" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.934073 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.943872 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.944324 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:03:02 crc 
kubenswrapper[4877]: I1211 18:03:02.944953 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:02 crc kubenswrapper[4877]: E1211 18:03:02.950421 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.450403991 +0000 UTC m=+144.476648025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.972336 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" podStartSLOduration=122.972314315 podStartE2EDuration="2m2.972314315s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.971914775 +0000 UTC m=+143.998158829" watchObservedRunningTime="2025-12-11 18:03:02.972314315 +0000 UTC m=+143.998558359" Dec 11 18:03:02 crc kubenswrapper[4877]: I1211 18:03:02.973465 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hnn72" podStartSLOduration=122.973459466 podStartE2EDuration="2m2.973459466s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:02.873721315 +0000 UTC m=+143.899965359" watchObservedRunningTime="2025-12-11 18:03:02.973459466 +0000 UTC m=+143.999703510" Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.053633 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" podStartSLOduration=123.053611034 podStartE2EDuration="2m3.053611034s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:03.052679269 +0000 UTC m=+144.078923313" watchObservedRunningTime="2025-12-11 18:03:03.053611034 +0000 UTC m=+144.079855068" Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.054424 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.054938 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.554919599 +0000 UTC m=+144.581163643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.167401 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.167877 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.667862591 +0000 UTC m=+144.694106625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.268190 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.268598 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.768582498 +0000 UTC m=+144.794826542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.369878 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.371104 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.871086602 +0000 UTC m=+144.897330646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.468462 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.472577 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.472794 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.972758234 +0000 UTC m=+144.999002278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.472870 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.473313 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:03.973304299 +0000 UTC m=+144.999548343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.538026 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:03 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:03 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:03 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.538113 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.574025 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.574523 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 18:03:04.074501438 +0000 UTC m=+145.100745482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.628847 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sjpc9" Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.677054 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.677823 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.177792513 +0000 UTC m=+145.204036747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.779287 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.780151 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.280134083 +0000 UTC m=+145.306378127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.884338 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.884984 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.384965779 +0000 UTC m=+145.411209823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.908184 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" event={"ID":"c909d56d-484a-4d56-a279-1d464fe45dc8","Type":"ContainerStarted","Data":"708a657dcfe973901056c42264b74fa847f4df2819e13bd06a65d677608e82ba"} Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.947817 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" event={"ID":"b8c61f40-7215-401b-9e07-74b95ed041cc","Type":"ContainerStarted","Data":"244b314f3aac18244af893e2dea8fdeebf721cbc9ce3ed3f0727fd6bf8cd4c1d"} Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.981257 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" event={"ID":"a2a6190e-6fb8-4f61-99e1-63972f85df6d","Type":"ContainerStarted","Data":"e6ebd30021210088fffa446a913ef72d03abf99865ff364da66c56286a95f5c2"} Dec 11 18:03:03 crc kubenswrapper[4877]: I1211 18:03:03.990275 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:03 crc kubenswrapper[4877]: E1211 18:03:03.994127 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.49409175 +0000 UTC m=+145.520335794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.059388 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" event={"ID":"a9eec3bb-1807-465b-b272-cd2767e499d5","Type":"ContainerStarted","Data":"0698376bf72a713ee2d26b7ecb127d3b15e2d4aea188f4213206bd26c32f947b"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.149335 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.151407 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.651391326 +0000 UTC m=+145.677635370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.181436 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-68sbb" podStartSLOduration=124.181416147 podStartE2EDuration="2m4.181416147s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.058831267 +0000 UTC m=+145.085075321" watchObservedRunningTime="2025-12-11 18:03:04.181416147 +0000 UTC m=+145.207660191" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.184937 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" event={"ID":"6a67db71-72a9-4d82-94e7-673e78b11dc6","Type":"ContainerStarted","Data":"837aab9a07e6fc8a2c05d84dcf48ac576987217d617adc58d21179ed522c034c"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.185717 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.215328 4877 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-lpcf4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 11 18:03:04 crc 
kubenswrapper[4877]: I1211 18:03:04.215587 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" podUID="6a67db71-72a9-4d82-94e7-673e78b11dc6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.216500 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" event={"ID":"fa0265e5-9837-4f97-891a-703b0e440df3","Type":"ContainerStarted","Data":"ce93d487a3b4c0fc8502eff471dc55d24616121f13c176a303fe1926aca1010e"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.217901 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.221951 4877 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k2xpd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.222044 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.222240 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tnprh" event={"ID":"74678aa2-d0e1-4db8-854b-4e545859f4b1","Type":"ContainerStarted","Data":"9364624af1dbe3e7abfca62beb1709939e915e633ab1b891f9031ba18bf5327d"} Dec 11 
18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.251140 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.251594 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.751568258 +0000 UTC m=+145.777812302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.256603 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" podStartSLOduration=124.256581292 podStartE2EDuration="2m4.256581292s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.251853615 +0000 UTC m=+145.278097659" watchObservedRunningTime="2025-12-11 18:03:04.256581292 +0000 UTC m=+145.282825336" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.257018 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4s29q" podStartSLOduration=124.257012243 podStartE2EDuration="2m4.257012243s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.202812327 +0000 UTC m=+145.229056371" watchObservedRunningTime="2025-12-11 18:03:04.257012243 +0000 UTC m=+145.283256287" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.286959 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" event={"ID":"d4d0a314-0249-46d9-9bdc-af9b7e063110","Type":"ContainerStarted","Data":"d4b2c9bede02d416ce99bd42005ad81d8af10682e8736156dce405898e37331f"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.326505 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" event={"ID":"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0","Type":"ContainerStarted","Data":"9771223fa989ae1c8962d5d676ac055ff029694ab100ee70bf9d920157a94370"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.355628 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.358536 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 18:03:04.858513111 +0000 UTC m=+145.884757335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.389784 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" podStartSLOduration=124.389760214 podStartE2EDuration="2m4.389760214s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.389178069 +0000 UTC m=+145.415422123" watchObservedRunningTime="2025-12-11 18:03:04.389760214 +0000 UTC m=+145.416004258" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.390542 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tnprh" podStartSLOduration=9.390536285 podStartE2EDuration="9.390536285s" podCreationTimestamp="2025-12-11 18:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.326027644 +0000 UTC m=+145.352271708" watchObservedRunningTime="2025-12-11 18:03:04.390536285 +0000 UTC m=+145.416780329" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.391178 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bj9b" 
event={"ID":"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7","Type":"ContainerStarted","Data":"82ea4e140a60382319cfb269dab3199169d6ec642a5fc6c7ad7a718f2464e11f"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.430773 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-842ws" event={"ID":"a531323f-a1c6-4ea2-bc5d-f4872beb8b60","Type":"ContainerStarted","Data":"5c53ece957082431a1fc275fab080498937819e47f0e995e425c84a02413f925"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.452523 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" podStartSLOduration=124.452472607 podStartE2EDuration="2m4.452472607s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.451070449 +0000 UTC m=+145.477314523" watchObservedRunningTime="2025-12-11 18:03:04.452472607 +0000 UTC m=+145.478716661" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.459227 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.460775 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:04.960732197 +0000 UTC m=+145.986976241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.503906 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" event={"ID":"a8cb7b6c-98f4-4a07-97e6-53e371e4cac3","Type":"ContainerStarted","Data":"67a9cba8f169a16d40934ae4952205df29f5d8cad9e517c495a136b3cde163c8"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.520364 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" event={"ID":"34b5f044-91ef-40f8-93f3-bfe4bb970cc6","Type":"ContainerStarted","Data":"0c34414f31294f3b11e960d1aba9b145e21a4e90618520de74fc49ba5730a26d"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.524166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" event={"ID":"efaee247-0579-47df-b29f-a6009d7302c3","Type":"ContainerStarted","Data":"bb477db8e41d039da8540e388f8cc11337a52b48c122101f15a1de071bd8cc88"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.527184 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:04 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:04 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:04 crc kubenswrapper[4877]: healthz 
check failed Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.527478 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.565751 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" event={"ID":"6bc6c140-29ac-419b-b97c-7d43b92b2cc1","Type":"ContainerStarted","Data":"e74774aaa030d7324760459fc3a99fb892144c2e53164738a09ac73700d60dc3"} Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.566242 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.568458 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.568556 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.068532892 +0000 UTC m=+146.094776936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.568577 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.576968 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" podStartSLOduration=124.576946967 podStartE2EDuration="2m4.576946967s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.546820023 +0000 UTC m=+145.573064067" watchObservedRunningTime="2025-12-11 18:03:04.576946967 +0000 UTC m=+145.603191021" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.589837 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nvhsm" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.668111 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.669555 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.169519055 +0000 UTC m=+146.195763159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.720530 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gxtrh" podStartSLOduration=124.720504795 podStartE2EDuration="2m4.720504795s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.688128831 +0000 UTC m=+145.714372875" watchObservedRunningTime="2025-12-11 18:03:04.720504795 +0000 UTC m=+145.746748839" Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.770685 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 
18:03:04.771086 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.271072574 +0000 UTC m=+146.297316618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:04 crc kubenswrapper[4877]: I1211 18:03:04.872744 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:04 crc kubenswrapper[4877]: E1211 18:03:04.873704 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.373680101 +0000 UTC m=+146.399924135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.014032 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.014607 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.514586989 +0000 UTC m=+146.540831033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.031035 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" podStartSLOduration=125.031002477 podStartE2EDuration="2m5.031002477s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.961653817 +0000 UTC m=+145.987897861" watchObservedRunningTime="2025-12-11 18:03:05.031002477 +0000 UTC m=+146.057246541" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.031654 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4jvkl" podStartSLOduration=125.031643244 podStartE2EDuration="2m5.031643244s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:04.799931354 +0000 UTC m=+145.826175408" watchObservedRunningTime="2025-12-11 18:03:05.031643244 +0000 UTC m=+146.057887288" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.123270 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.124159 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.624111921 +0000 UTC m=+146.650355965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.233876 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.234422 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.734404223 +0000 UTC m=+146.760648267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.336136 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.336633 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.836588728 +0000 UTC m=+146.862832772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.438829 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.439249 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:05.939235016 +0000 UTC m=+146.965479060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.482192 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-p7gb9" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.528728 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:05 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:05 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:05 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.529242 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.548129 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.548568 4877 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.048545152 +0000 UTC m=+147.074789196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.613751 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" event={"ID":"13bee5ff-144b-42e3-b5ae-01ed760c979b","Type":"ContainerStarted","Data":"d64feb98027fe97f208a863091be1a4560b631883b6d7ba1ba8556212fc0ecee"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.628602 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" event={"ID":"98cc800c-b8b6-49f9-94c0-42bb0c22eb76","Type":"ContainerStarted","Data":"b6dd2d73c2c5cb89ce9584f9223fa9a843560258cd4bc109c3c8a929795868a8"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.643538 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" event={"ID":"1dc1cc46-73f4-4e0b-a012-d9b9599ebc12","Type":"ContainerStarted","Data":"4aee4e21ecc05fe6c4f642fa5e5e2559c19a47cae3aa30798024738557a9a3ab"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.644234 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:03:05 crc 
kubenswrapper[4877]: I1211 18:03:05.649238 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.649572 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.149558457 +0000 UTC m=+147.175802501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.670685 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" event={"ID":"99c5646a-3905-4010-9538-53090f0160f5","Type":"ContainerStarted","Data":"63ee72a1c19721bc97a9384c6b7b2ceb1b6464f96bc15359ebd82d6e1a073645"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.670752 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" event={"ID":"99c5646a-3905-4010-9538-53090f0160f5","Type":"ContainerStarted","Data":"b9b9983199eb3f8948f62815767ce69cd489a9aeee2bce248071a8caaf3cc85c"} Dec 11 18:03:05 crc kubenswrapper[4877]: 
I1211 18:03:05.694953 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f2jq4" event={"ID":"d4d0a314-0249-46d9-9bdc-af9b7e063110","Type":"ContainerStarted","Data":"2d4e9f45d9451165644c6048192dd5bc0e0ff31507f57d741fbcf4d6aa9ce698"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.713418 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kntl9" podStartSLOduration=125.713393909 podStartE2EDuration="2m5.713393909s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.711835498 +0000 UTC m=+146.738079542" watchObservedRunningTime="2025-12-11 18:03:05.713393909 +0000 UTC m=+146.739637943" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.713818 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ftfkv" podStartSLOduration=125.71381127 podStartE2EDuration="2m5.71381127s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.664139876 +0000 UTC m=+146.690383920" watchObservedRunningTime="2025-12-11 18:03:05.71381127 +0000 UTC m=+146.740055324" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.747290 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" event={"ID":"1e05b4cd-2477-4d69-804f-c8dc59d6da3d","Type":"ContainerStarted","Data":"1aba5dc6af14c722a0cf222d25f126c60746ddb9578d02fde5355751b401799b"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.747345 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" event={"ID":"1e05b4cd-2477-4d69-804f-c8dc59d6da3d","Type":"ContainerStarted","Data":"a3d0603b043ca71ee767a13d943a48354b4b92a50d0c60e7b83c51311e7773ff"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.752149 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.753060 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.253018926 +0000 UTC m=+147.279262970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.763845 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" podStartSLOduration=125.763811014 podStartE2EDuration="2m5.763811014s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.742926237 +0000 UTC m=+146.769170281" watchObservedRunningTime="2025-12-11 18:03:05.763811014 +0000 UTC m=+146.790055058" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.778602 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" event={"ID":"43bf03d3-95e6-47a5-8384-6f7cd56f2cc0","Type":"ContainerStarted","Data":"01990ab27fba04149bb4b26c48167b97d1ffdb237e10d7fe09551f34e8e6bf6e"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.835410 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p68fz" podStartSLOduration=125.835350392 podStartE2EDuration="2m5.835350392s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.800296207 +0000 UTC m=+146.826540251" watchObservedRunningTime="2025-12-11 18:03:05.835350392 +0000 
UTC m=+146.861594426" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.854618 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vrbhw" event={"ID":"217c0bda-35b4-4332-83de-9210f6906544","Type":"ContainerStarted","Data":"6f85183ffba036c9768f9be159047f891f0fb940cff94b794404083f880e8c97"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.854676 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vrbhw" event={"ID":"217c0bda-35b4-4332-83de-9210f6906544","Type":"ContainerStarted","Data":"a67c60e7dce97f7d468f1bcf3f18c9fd41cd2f848d701dec15f0ee4347725f33"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.855499 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vrbhw" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.856293 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.866037 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.36601317 +0000 UTC m=+147.392257204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.877954 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" event={"ID":"d8bc79c0-914b-4875-a155-01d87a497f69","Type":"ContainerStarted","Data":"60b81d1adebcc754e4b60bdaa39f6cb885ccf322e97cd170c4fe6e4f2edac254"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.881567 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" event={"ID":"40febc91-7d7a-4130-acbb-6c8c434033df","Type":"ContainerStarted","Data":"48e66fe4897c0956497477f315e14010dc0e2f3370b771213c9840871dc49c4d"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.882452 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.889715 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c92n" podStartSLOduration=125.889687802 podStartE2EDuration="2m5.889687802s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.888425078 +0000 UTC m=+146.914669122" watchObservedRunningTime="2025-12-11 18:03:05.889687802 +0000 UTC m=+146.915931856" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 
18:03:05.890350 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tbhjv" podStartSLOduration=125.890344739 podStartE2EDuration="2m5.890344739s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.836437791 +0000 UTC m=+146.862681855" watchObservedRunningTime="2025-12-11 18:03:05.890344739 +0000 UTC m=+146.916588783" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.918156 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" event={"ID":"bad86dd9-d430-4692-9caf-5a7218fd02b7","Type":"ContainerStarted","Data":"cc24901b3689021e005019766c974b6f055ff5991fd06c4181b5ea81abbefffd"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.927956 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" podStartSLOduration=125.927937252 podStartE2EDuration="2m5.927937252s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.927513331 +0000 UTC m=+146.953757385" watchObservedRunningTime="2025-12-11 18:03:05.927937252 +0000 UTC m=+146.954181296" Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.953695 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" event={"ID":"e37c4bcf-f60b-44fa-b0c3-85eecf0e785b","Type":"ContainerStarted","Data":"d930286b5bf0518d033154116acdff486d79796c92dd2a09bc73d26802a9f218"} Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.957570 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:05 crc kubenswrapper[4877]: E1211 18:03:05.959288 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.459266188 +0000 UTC m=+147.485510232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:05 crc kubenswrapper[4877]: I1211 18:03:05.997465 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vrbhw" podStartSLOduration=10.997430806 podStartE2EDuration="10.997430806s" podCreationTimestamp="2025-12-11 18:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:05.985261761 +0000 UTC m=+147.011505805" watchObservedRunningTime="2025-12-11 18:03:05.997430806 +0000 UTC m=+147.023674850" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.013634 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" event={"ID":"c909d56d-484a-4d56-a279-1d464fe45dc8","Type":"ContainerStarted","Data":"47b11aba95e226480c12139e8cbcb437b7c8d501c0405e7ba6dfc89c1a395d13"} Dec 11 
18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.031040 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sjt2h" event={"ID":"8c92f553-9f22-485e-80a4-86b223a70ef7","Type":"ContainerStarted","Data":"952c2b02c3639734c844b6fa5aa174efb92396ff7f8365ed1f2a5380123d9a3a"} Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.036767 4877 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k2xpd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.036839 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.038766 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jwq6x" podStartSLOduration=126.038743958 podStartE2EDuration="2m6.038743958s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:06.037362901 +0000 UTC m=+147.063606965" watchObservedRunningTime="2025-12-11 18:03:06.038743958 +0000 UTC m=+147.064988012" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.058873 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" 
start-of-body= Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.060095 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.075653 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.083145 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.583097161 +0000 UTC m=+147.609341205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.141202 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4pjpv" podStartSLOduration=126.14118095 podStartE2EDuration="2m6.14118095s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:06.139834034 +0000 UTC m=+147.166078078" watchObservedRunningTime="2025-12-11 18:03:06.14118095 +0000 UTC m=+147.167424984" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.183061 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.185093 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.685071931 +0000 UTC m=+147.711315975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.199888 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" podStartSLOduration=126.199867595 podStartE2EDuration="2m6.199867595s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:06.197002909 +0000 UTC m=+147.223246953" watchObservedRunningTime="2025-12-11 18:03:06.199867595 +0000 UTC m=+147.226111639" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.224126 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lpcf4" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.270600 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2c697" podStartSLOduration=126.270580122 podStartE2EDuration="2m6.270580122s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:06.265920997 +0000 UTC m=+147.292165041" watchObservedRunningTime="2025-12-11 18:03:06.270580122 +0000 UTC m=+147.296824166" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.288395 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.288773 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.788759517 +0000 UTC m=+147.815003561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.389637 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.389871 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 
18:03:06.390265 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.890226233 +0000 UTC m=+147.916470277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.390461 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.390582 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.390728 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.390844 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.390990 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.391163 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.891145608 +0000 UTC m=+147.917389832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.433698 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.442618 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.444493 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.455641 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.497775 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.498221 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:06.998204683 +0000 UTC m=+148.024448727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.532869 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:06 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:06 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:06 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.532931 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.599261 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.599856 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 18:03:07.099835884 +0000 UTC m=+148.126079928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.647943 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.648330 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.703084 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.703744 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.203727455 +0000 UTC m=+148.229971499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.742091 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.744895 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.806318 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.821936 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.321913018 +0000 UTC m=+148.348157052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.887807 4877 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gx8gb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.887899 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" podUID="40febc91-7d7a-4130-acbb-6c8c434033df" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:03:06 crc kubenswrapper[4877]: I1211 18:03:06.907539 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:06 crc kubenswrapper[4877]: E1211 18:03:06.908026 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 18:03:07.408004314 +0000 UTC m=+148.434248358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.009454 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.009918 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.509903012 +0000 UTC m=+148.536147056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.036019 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b57c5887ca33c30a78fb2d604a2443b72a23e11d39f3e69b43fa317b8e8ac2e2"} Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.038638 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kfghc" event={"ID":"c909d56d-484a-4d56-a279-1d464fe45dc8","Type":"ContainerStarted","Data":"c24553c88cab19df59faa332cce73ab8015298e4e6e386c2b532806cb30fa0b2"} Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.040753 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-842ws" event={"ID":"a531323f-a1c6-4ea2-bc5d-f4872beb8b60","Type":"ContainerStarted","Data":"09a758009488c2d47e3e267c11a3b3bc663a7815677583c83cd39404c19d6212"} Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.044339 4877 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k2xpd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.044406 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" 
podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.103187 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.104217 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.110871 4877 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7ffdw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]log ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]etcd ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/max-in-flight-filter ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 18:03:07 crc kubenswrapper[4877]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/openshift.io-startinformers ok Dec 11 18:03:07 crc 
kubenswrapper[4877]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 18:03:07 crc kubenswrapper[4877]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 18:03:07 crc kubenswrapper[4877]: livez check failed Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.110971 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" podUID="efaee247-0579-47df-b29f-a6009d7302c3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.112395 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.115648 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.615618462 +0000 UTC m=+148.641862506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.124722 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.172672 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.220692 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qpm\" (UniqueName: \"kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.220733 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.220773 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities\") pod \"certified-operators-cxf6j\" (UID: 
\"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.220813 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.221140 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.721126056 +0000 UTC m=+148.747370100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.327242 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.328151 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qpm\" (UniqueName: 
\"kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.328174 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.328220 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.328696 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.328783 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.828763257 +0000 UTC m=+148.855007301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.329338 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.405896 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pvh2t"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.417724 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.427171 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvh2t"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.432313 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.444159 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.444731 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:07.9447106 +0000 UTC m=+148.970954654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.451414 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qpm\" (UniqueName: \"kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm\") pod \"certified-operators-cxf6j\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.508034 4877 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.529198 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:07 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:07 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:07 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.529293 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.559065 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.559331 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl852\" (UniqueName: \"kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.559408 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.559443 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.559557 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.059535343 +0000 UTC m=+149.085779387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.560131 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.615648 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.617227 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.664315 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.664409 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.664442 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jl852\" (UniqueName: \"kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.664500 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.665549 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.665926 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.16590297 +0000 UTC m=+149.192147014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.666133 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.712569 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx8gb" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.749408 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.767290 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.767640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " 
pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.767708 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qplr\" (UniqueName: \"kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.767730 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.767862 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.267841219 +0000 UTC m=+149.294085263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.799616 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl852\" (UniqueName: \"kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852\") pod \"community-operators-pvh2t\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.811959 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2rstw"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.813742 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.835713 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.842815 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rstw"] Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.868901 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qplr\" (UniqueName: \"kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.868948 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.868982 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.869006 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.869049 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.869086 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.869134 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6khs\" (UniqueName: \"kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.870097 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.870519 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.370503478 +0000 UTC m=+149.396747522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.870922 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.938236 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qplr\" (UniqueName: \"kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr\") pod \"certified-operators-tq2vm\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.978806 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.979172 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " 
pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.979201 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.979273 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6khs\" (UniqueName: \"kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: E1211 18:03:07.979485 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.479453304 +0000 UTC m=+149.505697348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.979775 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:07 crc kubenswrapper[4877]: I1211 18:03:07.984278 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.034435 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.069977 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e07c2b68bed7d3bdabb94d4de7015140d5b95d185f5ad58ae32797faf503e210"} Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.074573 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6khs\" (UniqueName: \"kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs\") pod \"community-operators-2rstw\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.081108 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: E1211 18:03:08.081701 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.58166721 +0000 UTC m=+149.607911254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.093616 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"71c9217cb732109dfc0171f965f030bea28fa294d2142c179bbd114cd1b01dea"} Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.093688 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.142992 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.143286 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.143782 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.143833 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial 
tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.144129 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.144151 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.151170 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-842ws" event={"ID":"a531323f-a1c6-4ea2-bc5d-f4872beb8b60","Type":"ContainerStarted","Data":"e03154bfa617bf9f236d6980758812b405f668ac142557307387a1b84dd9d896"} Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.165801 4877 patch_prober.go:28] interesting pod/console-f9d7485db-7bj9b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.165865 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7bj9b" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.166729 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"888212ba5d401b5fda3a98f6218729991ddf4881c81309fd460bdbc7f5d270bd"} Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.199958 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:08 crc kubenswrapper[4877]: E1211 18:03:08.200618 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.700571081 +0000 UTC m=+149.726815125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.265746 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.305400 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: E1211 18:03:08.306863 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.806838185 +0000 UTC m=+149.833082219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.433264 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:08 crc kubenswrapper[4877]: E1211 18:03:08.433537 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.933503834 +0000 UTC m=+149.959747878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.448796 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: E1211 18:03:08.449485 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 18:03:08.94946364 +0000 UTC m=+149.975707684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89rk4" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.482443 4877 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T18:03:07.50807757Z","Handler":null,"Name":""} Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.504421 4877 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.504487 4877 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.541496 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.550080 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.557992 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:08 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:08 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:08 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.558056 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.558729 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.567751 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.633566 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"] Dec 11 18:03:08 crc kubenswrapper[4877]: W1211 18:03:08.652012 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14091be0_96bc_40cf_ae0e_ba1a5de910ee.slice/crio-a27b3b91f77aa9a891ea83fae5db7b18047da23de15e95ab729d513f009f7242 WatchSource:0}: Error finding container a27b3b91f77aa9a891ea83fae5db7b18047da23de15e95ab729d513f009f7242: Status 404 returned error can't find the container with id a27b3b91f77aa9a891ea83fae5db7b18047da23de15e95ab729d513f009f7242 Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.653034 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.675767 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.676610 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.682345 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.683336 4877 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.683397 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.685508 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.689832 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.849363 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvh2t"] Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.860995 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.861091 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.962909 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.963018 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.963144 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.972847 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89rk4\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:08 crc kubenswrapper[4877]: I1211 18:03:08.997205 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.014839 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.169691 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rstw"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.232775 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.253887 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.254775 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerStarted","Data":"372f930116a6abf8ee14b666ad7499a82306bbdf49bacb074c5545f492c07d5a"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.254802 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerStarted","Data":"a27b3b91f77aa9a891ea83fae5db7b18047da23de15e95ab729d513f009f7242"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.254816 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-842ws" event={"ID":"a531323f-a1c6-4ea2-bc5d-f4872beb8b60","Type":"ContainerStarted","Data":"8cb1cc700ba60bb8e784dc35d0d5844f81e03ceb8d3ff0e08e1636f466c391a8"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.263368 4877 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.271350 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.275319 4877 generic.go:334] "Generic (PLEG): container finished" podID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerID="2afe35ee1834b4918f1ef2bfe7e4f132f3952eab6b7cf62a1168dd4a82dc007a" exitCode=0 Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.275416 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerDied","Data":"2afe35ee1834b4918f1ef2bfe7e4f132f3952eab6b7cf62a1168dd4a82dc007a"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.275448 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerStarted","Data":"3771f30b8ac7e181239d9669a1c3b69438c3a2e24301d75b8504c324e6964b41"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.293219 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.295251 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.296823 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"34c2685ebf83fe686608eea8d0f0762b64cc6c928bb331eb49070908174a0ca3"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.302941 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.380930 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"402a8f7ba9afc4c3351a37358886bc16a388e73006a521cbcb84b11d6f677d66"} Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.446483 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.537194 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:09 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:09 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:09 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.537261 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:09 crc 
kubenswrapper[4877]: I1211 18:03:09.556679 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.556750 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.556931 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6dd\" (UniqueName: \"kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.597990 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-842ws" podStartSLOduration=14.597964225 podStartE2EDuration="14.597964225s" podCreationTimestamp="2025-12-11 18:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:09.570345418 +0000 UTC m=+150.596589462" watchObservedRunningTime="2025-12-11 18:03:09.597964225 +0000 UTC m=+150.624208269" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.674959 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.675044 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk6dd\" (UniqueName: \"kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.675104 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.675613 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.675851 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.709497 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 
18:03:09.710975 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.744677 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk6dd\" (UniqueName: \"kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd\") pod \"redhat-marketplace-7fwxl\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.774824 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.803564 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.804940 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.806038 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.811256 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.811599 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.829831 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.881038 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.881126 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.881151 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.881170 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.881187 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8spn5\" (UniqueName: \"kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982389 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982465 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982490 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982520 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982540 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8spn5\" (UniqueName: \"kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.982996 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.983526 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.983520 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:09 crc kubenswrapper[4877]: I1211 18:03:09.995470 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"] Dec 11 18:03:10 crc kubenswrapper[4877]: 
I1211 18:03:10.008269 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.023332 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8spn5\" (UniqueName: \"kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5\") pod \"redhat-marketplace-rhhb6\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.032965 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.051122 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.092555 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.279368 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.280548 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.284368 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.313748 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.392234 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.392287 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.392559 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktpcs\" (UniqueName: \"kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.400691 4877 generic.go:334] "Generic (PLEG): container finished" podID="5400698a-74be-440a-9e76-ae18bc00d85b" containerID="50299776091dbe50e5fe02fda2d93a75d099ad2f904d2c0a7c317c9e1de12fbf" exitCode=0 Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.400819 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerDied","Data":"50299776091dbe50e5fe02fda2d93a75d099ad2f904d2c0a7c317c9e1de12fbf"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.400875 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerStarted","Data":"10fd2501183029889704f685a7fb7f0c0bc1d4ce500e3aa8c86d5dcddd5029bb"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.405478 4877 generic.go:334] "Generic (PLEG): container finished" podID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerID="b209992be05894e18b746508f8b9893b155758edee63613290ebdcda01f928c3" exitCode=0 Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.405629 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerDied","Data":"b209992be05894e18b746508f8b9893b155758edee63613290ebdcda01f928c3"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.405830 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerStarted","Data":"7be2b2e48b9662450fb54300afab0abe66b9927640459eaa75105832d93afc16"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.413614 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" event={"ID":"0fff3932-5b5f-49af-a652-9030dd8f6139","Type":"ContainerStarted","Data":"5d642c7aec73ab6c8a252e75b4e09163a238a19104336a6ea9f07d99d5efad7b"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.432644 4877 generic.go:334] "Generic (PLEG): container finished" podID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" 
containerID="372f930116a6abf8ee14b666ad7499a82306bbdf49bacb074c5545f492c07d5a" exitCode=0 Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.432732 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerDied","Data":"372f930116a6abf8ee14b666ad7499a82306bbdf49bacb074c5545f492c07d5a"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.434447 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c188a9a5-5f06-4775-b907-9d9a09716c3a","Type":"ContainerStarted","Data":"b5fcfe26d45adfed81f89b42393f660de5bf2457fb73792399f046669dce4973"} Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.497189 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.497268 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.497423 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktpcs\" (UniqueName: \"kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.498274 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.522833 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.530140 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktpcs\" (UniqueName: \"kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs\") pod \"redhat-operators-xkm4z\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.537052 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:10 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:10 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:10 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.537109 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.576090 4877 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.624213 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.706772 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.708178 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.708337 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.771113 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.792169 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.804760 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.804884 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: 
I1211 18:03:10.805352 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zngg\" (UniqueName: \"kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.907884 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.907956 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zngg\" (UniqueName: \"kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.908068 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.910404 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.910881 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.945347 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zngg\" (UniqueName: \"kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg\") pod \"redhat-operators-m7zms\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:10 crc kubenswrapper[4877]: I1211 18:03:10.972332 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.297032 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"] Dec 11 18:03:11 crc kubenswrapper[4877]: W1211 18:03:11.438810 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0240e5d9_27fa_42fe_8cab_63a80897677e.slice/crio-d486941a07b0992e46b6996848cc9a3034a1112f191b4f15221cbe009b40c2fc WatchSource:0}: Error finding container d486941a07b0992e46b6996848cc9a3034a1112f191b4f15221cbe009b40c2fc: Status 404 returned error can't find the container with id d486941a07b0992e46b6996848cc9a3034a1112f191b4f15221cbe009b40c2fc Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.484247 4877 generic.go:334] "Generic (PLEG): container finished" podID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerID="d2bc79b5ed0c1a395527fad1a24824ae9167d6bbd43c88f1da3c8fabe0fbd6f4" exitCode=0 Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.484356 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerDied","Data":"d2bc79b5ed0c1a395527fad1a24824ae9167d6bbd43c88f1da3c8fabe0fbd6f4"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.484415 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerStarted","Data":"57a8b912feeb3c55b25973d7b2e76ee40d1adc0ee5655dc8ae001eb41998f72b"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.488950 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8510a485-0606-413b-913c-133c3045bf8e","Type":"ContainerStarted","Data":"f92ab5b5b1d90cc0ead3150b32e20b05b2e1f4566f81f9da32cfc219efc7f6e1"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.508126 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" event={"ID":"0fff3932-5b5f-49af-a652-9030dd8f6139","Type":"ContainerStarted","Data":"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.508693 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.514840 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c188a9a5-5f06-4775-b907-9d9a09716c3a","Type":"ContainerStarted","Data":"a359d8a271d40291ec3067367884d55c8a6f90b3fe87549b7297656585c73586"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.519170 4877 generic.go:334] "Generic (PLEG): container finished" podID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerID="bc58a2b1e97deac1b00f40e04e48d762a557afcb1a858c5c8d322979d04d8d4c" exitCode=0 Dec 11 18:03:11 crc 
kubenswrapper[4877]: I1211 18:03:11.519219 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerDied","Data":"bc58a2b1e97deac1b00f40e04e48d762a557afcb1a858c5c8d322979d04d8d4c"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.519246 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerStarted","Data":"e733cf8b2e093e65e03f07ace9f1c31753b9fd6709e7d870929cd61127c601ef"} Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.532039 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:11 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:11 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:11 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.532134 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.546618 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" podStartSLOduration=131.546582322 podStartE2EDuration="2m11.546582322s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:11.539636767 +0000 UTC m=+152.565880821" watchObservedRunningTime="2025-12-11 
18:03:11.546582322 +0000 UTC m=+152.572826376" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.565957 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"] Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.603573 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.603543891 podStartE2EDuration="3.603543891s" podCreationTimestamp="2025-12-11 18:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:11.601064185 +0000 UTC m=+152.627308229" watchObservedRunningTime="2025-12-11 18:03:11.603543891 +0000 UTC m=+152.629787925" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.661764 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:03:11 crc kubenswrapper[4877]: I1211 18:03:11.667494 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7ffdw" Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.527237 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:12 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:12 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:12 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.527693 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.532128 4877 generic.go:334] "Generic (PLEG): container finished" podID="2b21482e-04d8-493d-a274-670ce4961923" containerID="150baca428c2169e9173e557e7a5bd0ec016443dd6b0aa88006b1055e7af5ea3" exitCode=0 Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.532205 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" event={"ID":"2b21482e-04d8-493d-a274-670ce4961923","Type":"ContainerDied","Data":"150baca428c2169e9173e557e7a5bd0ec016443dd6b0aa88006b1055e7af5ea3"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.536905 4877 generic.go:334] "Generic (PLEG): container finished" podID="c188a9a5-5f06-4775-b907-9d9a09716c3a" containerID="a359d8a271d40291ec3067367884d55c8a6f90b3fe87549b7297656585c73586" exitCode=0 Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.537029 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c188a9a5-5f06-4775-b907-9d9a09716c3a","Type":"ContainerDied","Data":"a359d8a271d40291ec3067367884d55c8a6f90b3fe87549b7297656585c73586"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.538819 4877 generic.go:334] "Generic (PLEG): container finished" podID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerID="0b9edceaa0952b798bc31c5715e0efb305e090709c19cb1dd3d6cdbeda09c27a" exitCode=0 Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.538876 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerDied","Data":"0b9edceaa0952b798bc31c5715e0efb305e090709c19cb1dd3d6cdbeda09c27a"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.538898 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" 
event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerStarted","Data":"d486941a07b0992e46b6996848cc9a3034a1112f191b4f15221cbe009b40c2fc"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.542436 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8510a485-0606-413b-913c-133c3045bf8e","Type":"ContainerStarted","Data":"e6966ecda3bcc73f455020c6a80c8f7c7229b5daf7bc23172e0e5e784cf2b36b"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.548043 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerStarted","Data":"d252fc6f3bf8204929c493eed937350270b019751e5b15c5f4daf0634bba9dc4"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.548245 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerStarted","Data":"2cc00e21e5e36f62475fcc2081a9ec709aa41a625a676f0bad087b6aebd68cb3"} Dec 11 18:03:12 crc kubenswrapper[4877]: I1211 18:03:12.597579 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.597557375 podStartE2EDuration="3.597557375s" podCreationTimestamp="2025-12-11 18:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:03:12.596458445 +0000 UTC m=+153.622702499" watchObservedRunningTime="2025-12-11 18:03:12.597557375 +0000 UTC m=+153.623801419" Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.534733 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Dec 11 18:03:13 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:13 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:13 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.536567 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.566563 4877 generic.go:334] "Generic (PLEG): container finished" podID="8510a485-0606-413b-913c-133c3045bf8e" containerID="e6966ecda3bcc73f455020c6a80c8f7c7229b5daf7bc23172e0e5e784cf2b36b" exitCode=0 Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.566731 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8510a485-0606-413b-913c-133c3045bf8e","Type":"ContainerDied","Data":"e6966ecda3bcc73f455020c6a80c8f7c7229b5daf7bc23172e0e5e784cf2b36b"} Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.570607 4877 generic.go:334] "Generic (PLEG): container finished" podID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerID="d252fc6f3bf8204929c493eed937350270b019751e5b15c5f4daf0634bba9dc4" exitCode=0 Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.571457 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerDied","Data":"d252fc6f3bf8204929c493eed937350270b019751e5b15c5f4daf0634bba9dc4"} Dec 11 18:03:13 crc kubenswrapper[4877]: I1211 18:03:13.730107 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vrbhw" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.069893 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.141254 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239097 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume\") pod \"2b21482e-04d8-493d-a274-670ce4961923\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239173 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d75bx\" (UniqueName: \"kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx\") pod \"2b21482e-04d8-493d-a274-670ce4961923\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239205 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume\") pod \"2b21482e-04d8-493d-a274-670ce4961923\" (UID: \"2b21482e-04d8-493d-a274-670ce4961923\") " Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239278 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access\") pod \"c188a9a5-5f06-4775-b907-9d9a09716c3a\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239310 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir\") pod 
\"c188a9a5-5f06-4775-b907-9d9a09716c3a\" (UID: \"c188a9a5-5f06-4775-b907-9d9a09716c3a\") " Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.239728 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c188a9a5-5f06-4775-b907-9d9a09716c3a" (UID: "c188a9a5-5f06-4775-b907-9d9a09716c3a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.241214 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b21482e-04d8-493d-a274-670ce4961923" (UID: "2b21482e-04d8-493d-a274-670ce4961923"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.250260 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b21482e-04d8-493d-a274-670ce4961923" (UID: "2b21482e-04d8-493d-a274-670ce4961923"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.253783 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx" (OuterVolumeSpecName: "kube-api-access-d75bx") pod "2b21482e-04d8-493d-a274-670ce4961923" (UID: "2b21482e-04d8-493d-a274-670ce4961923"). InnerVolumeSpecName "kube-api-access-d75bx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.255143 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c188a9a5-5f06-4775-b907-9d9a09716c3a" (UID: "c188a9a5-5f06-4775-b907-9d9a09716c3a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.341006 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b21482e-04d8-493d-a274-670ce4961923-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.341042 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d75bx\" (UniqueName: \"kubernetes.io/projected/2b21482e-04d8-493d-a274-670ce4961923-kube-api-access-d75bx\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.341054 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b21482e-04d8-493d-a274-670ce4961923-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.341065 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c188a9a5-5f06-4775-b907-9d9a09716c3a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.341074 4877 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c188a9a5-5f06-4775-b907-9d9a09716c3a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.527862 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:14 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:14 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:14 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.527979 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.592537 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" event={"ID":"2b21482e-04d8-493d-a274-670ce4961923","Type":"ContainerDied","Data":"3643c4f7cd82bbd90f907290323f4cf0a8e774f131ba026dc33c5a89db0828b1"} Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.592584 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3643c4f7cd82bbd90f907290323f4cf0a8e774f131ba026dc33c5a89db0828b1" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.592644 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.611986 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.621519 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c188a9a5-5f06-4775-b907-9d9a09716c3a","Type":"ContainerDied","Data":"b5fcfe26d45adfed81f89b42393f660de5bf2457fb73792399f046669dce4973"} Dec 11 18:03:14 crc kubenswrapper[4877]: I1211 18:03:14.621587 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5fcfe26d45adfed81f89b42393f660de5bf2457fb73792399f046669dce4973" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.089930 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.255331 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access\") pod \"8510a485-0606-413b-913c-133c3045bf8e\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.256167 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir\") pod \"8510a485-0606-413b-913c-133c3045bf8e\" (UID: \"8510a485-0606-413b-913c-133c3045bf8e\") " Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.256252 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8510a485-0606-413b-913c-133c3045bf8e" (UID: "8510a485-0606-413b-913c-133c3045bf8e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.256884 4877 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8510a485-0606-413b-913c-133c3045bf8e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.276521 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8510a485-0606-413b-913c-133c3045bf8e" (UID: "8510a485-0606-413b-913c-133c3045bf8e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.358329 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8510a485-0606-413b-913c-133c3045bf8e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.527362 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:15 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:15 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:15 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.527477 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.674179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8510a485-0606-413b-913c-133c3045bf8e","Type":"ContainerDied","Data":"f92ab5b5b1d90cc0ead3150b32e20b05b2e1f4566f81f9da32cfc219efc7f6e1"} Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.674229 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92ab5b5b1d90cc0ead3150b32e20b05b2e1f4566f81f9da32cfc219efc7f6e1" Dec 11 18:03:15 crc kubenswrapper[4877]: I1211 18:03:15.674324 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 18:03:16 crc kubenswrapper[4877]: I1211 18:03:16.527537 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:16 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:16 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:16 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:16 crc kubenswrapper[4877]: I1211 18:03:16.528299 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:16 crc kubenswrapper[4877]: I1211 18:03:16.638544 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:03:16 crc kubenswrapper[4877]: I1211 18:03:16.638648 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:03:17 crc kubenswrapper[4877]: I1211 18:03:17.526602 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:17 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:17 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:17 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:17 crc kubenswrapper[4877]: I1211 18:03:17.526701 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.120403 4877 patch_prober.go:28] interesting pod/console-f9d7485db-7bj9b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.120483 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7bj9b" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.139838 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.139855 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.139993 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.139929 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.525414 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:18 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:18 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:18 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:18 crc kubenswrapper[4877]: I1211 18:03:18.525967 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:19 crc kubenswrapper[4877]: 
I1211 18:03:19.526840 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:19 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:19 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:19 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:19 crc kubenswrapper[4877]: I1211 18:03:19.526922 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:20 crc kubenswrapper[4877]: I1211 18:03:20.525790 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:20 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:20 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:20 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:20 crc kubenswrapper[4877]: I1211 18:03:20.525886 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:21 crc kubenswrapper[4877]: I1211 18:03:21.528649 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:21 crc kubenswrapper[4877]: [-]has-synced failed: 
reason withheld Dec 11 18:03:21 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:21 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:21 crc kubenswrapper[4877]: I1211 18:03:21.529202 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:22 crc kubenswrapper[4877]: I1211 18:03:22.544638 4877 patch_prober.go:28] interesting pod/router-default-5444994796-mdpb8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 18:03:22 crc kubenswrapper[4877]: [-]has-synced failed: reason withheld Dec 11 18:03:22 crc kubenswrapper[4877]: [+]process-running ok Dec 11 18:03:22 crc kubenswrapper[4877]: healthz check failed Dec 11 18:03:22 crc kubenswrapper[4877]: I1211 18:03:22.544727 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mdpb8" podUID="63141648-7cad-41e1-96e7-38c0305347b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 18:03:22 crc kubenswrapper[4877]: I1211 18:03:22.665079 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: \"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:03:22 crc kubenswrapper[4877]: I1211 18:03:22.675554 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa0b7b99-8d0a-48ad-9f98-da5947644472-metrics-certs\") pod \"network-metrics-daemon-sn9xv\" (UID: 
\"fa0b7b99-8d0a-48ad-9f98-da5947644472\") " pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:03:22 crc kubenswrapper[4877]: I1211 18:03:22.842115 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sn9xv" Dec 11 18:03:23 crc kubenswrapper[4877]: I1211 18:03:23.543584 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:03:23 crc kubenswrapper[4877]: I1211 18:03:23.571156 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mdpb8" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.123027 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.125849 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.125947 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.126083 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.126168 4877 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.126200 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.127012 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.127124 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.127353 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2389e0b76e148d53e9e36619bcab48cdb967f6c036f3f5efa19028dc31811df7"} pod="openshift-console/downloads-7954f5f757-cvvxz" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.127558 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" containerID="cri-o://2389e0b76e148d53e9e36619bcab48cdb967f6c036f3f5efa19028dc31811df7" gracePeriod=2 Dec 11 18:03:28 crc kubenswrapper[4877]: I1211 18:03:28.127929 
4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:03:29 crc kubenswrapper[4877]: I1211 18:03:29.240415 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:03:29 crc kubenswrapper[4877]: I1211 18:03:29.905747 4877 generic.go:334] "Generic (PLEG): container finished" podID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerID="2389e0b76e148d53e9e36619bcab48cdb967f6c036f3f5efa19028dc31811df7" exitCode=0 Dec 11 18:03:29 crc kubenswrapper[4877]: I1211 18:03:29.906013 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cvvxz" event={"ID":"1cae51ed-b80c-4017-9b9f-1485a809f145","Type":"ContainerDied","Data":"2389e0b76e148d53e9e36619bcab48cdb967f6c036f3f5efa19028dc31811df7"} Dec 11 18:03:38 crc kubenswrapper[4877]: I1211 18:03:38.127560 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:38 crc kubenswrapper[4877]: I1211 18:03:38.128153 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:38 crc kubenswrapper[4877]: I1211 18:03:38.284466 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rglc" Dec 11 18:03:40 crc kubenswrapper[4877]: I1211 18:03:40.243180 4877 trace.go:236] Trace[452685991]: "Calculate volume metrics of service-ca-bundle for pod 
openshift-ingress/router-default-5444994796-mdpb8" (11-Dec-2025 18:03:39.168) (total time: 1074ms): Dec 11 18:03:40 crc kubenswrapper[4877]: Trace[452685991]: [1.07446407s] [1.07446407s] END Dec 11 18:03:40 crc kubenswrapper[4877]: I1211 18:03:40.243451 4877 trace.go:236] Trace[499608014]: "Calculate volume metrics of auth-proxy-config for pod openshift-cluster-machine-approver/machine-approver-56656f9798-vxx99" (11-Dec-2025 18:03:39.166) (total time: 1076ms): Dec 11 18:03:40 crc kubenswrapper[4877]: Trace[499608014]: [1.076656178s] [1.076656178s] END Dec 11 18:03:40 crc kubenswrapper[4877]: I1211 18:03:40.244442 4877 trace.go:236] Trace[1217300108]: "Calculate volume metrics of v4-0-config-system-trusted-ca-bundle for pod openshift-authentication/oauth-openshift-558db77b4-kqnqb" (11-Dec-2025 18:03:39.169) (total time: 1074ms): Dec 11 18:03:40 crc kubenswrapper[4877]: Trace[1217300108]: [1.074642305s] [1.074642305s] END Dec 11 18:03:46 crc kubenswrapper[4877]: I1211 18:03:46.637644 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:03:46 crc kubenswrapper[4877]: I1211 18:03:46.638211 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.170709 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.668815 4877 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 18:03:47 crc kubenswrapper[4877]: E1211 18:03:47.669117 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b21482e-04d8-493d-a274-670ce4961923" containerName="collect-profiles" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669132 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b21482e-04d8-493d-a274-670ce4961923" containerName="collect-profiles" Dec 11 18:03:47 crc kubenswrapper[4877]: E1211 18:03:47.669153 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c188a9a5-5f06-4775-b907-9d9a09716c3a" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669160 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c188a9a5-5f06-4775-b907-9d9a09716c3a" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: E1211 18:03:47.669175 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8510a485-0606-413b-913c-133c3045bf8e" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669181 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8510a485-0606-413b-913c-133c3045bf8e" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669532 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b21482e-04d8-493d-a274-670ce4961923" containerName="collect-profiles" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669550 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="8510a485-0606-413b-913c-133c3045bf8e" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.669563 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c188a9a5-5f06-4775-b907-9d9a09716c3a" containerName="pruner" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.670056 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.674182 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.674627 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.682999 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.797715 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.798296 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.900195 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.900307 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.900452 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.926147 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:47 crc kubenswrapper[4877]: I1211 18:03:47.989781 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 18:03:48 crc kubenswrapper[4877]: I1211 18:03:48.127737 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:48 crc kubenswrapper[4877]: I1211 18:03:48.127827 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.073772 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.078966 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.079883 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.181675 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.182451 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.182663 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.284667 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.284755 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.284801 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.284885 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.284932 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.306142 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:53 crc kubenswrapper[4877]: I1211 18:03:53.409101 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:03:56 crc kubenswrapper[4877]: E1211 18:03:56.832699 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 18:03:56 crc kubenswrapper[4877]: E1211 18:03:56.833904 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7qpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cxf6j_openshift-marketplace(14091be0-96bc-40cf-ae0e-ba1a5de910ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:03:56 crc kubenswrapper[4877]: E1211 18:03:56.835077 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cxf6j" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" Dec 11 18:03:58 crc kubenswrapper[4877]: I1211 18:03:58.125797 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:03:58 crc kubenswrapper[4877]: I1211 18:03:58.126291 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.158840 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cxf6j" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.257704 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.257936 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6khs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2rstw_openshift-marketplace(5400698a-74be-440a-9e76-ae18bc00d85b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.259538 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2rstw" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.277672 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.277912 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jl852,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pvh2t_openshift-marketplace(2f3a3c00-9466-4501-b1ee-12d677fc3b4e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.279883 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pvh2t" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" Dec 11 18:03:58 crc 
kubenswrapper[4877]: E1211 18:03:58.304362 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.304580 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qplr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-tq2vm_openshift-marketplace(d22d1fa7-6fd8-43c8-94e0-6aa0759394dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:03:58 crc kubenswrapper[4877]: E1211 18:03:58.305819 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tq2vm" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" Dec 11 18:04:01 crc kubenswrapper[4877]: E1211 18:04:01.712913 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tq2vm" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" Dec 11 18:04:01 crc kubenswrapper[4877]: E1211 18:04:01.712994 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pvh2t" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" Dec 11 18:04:01 crc kubenswrapper[4877]: E1211 18:04:01.713138 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2rstw" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.527581 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.528353 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zngg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-m7zms_openshift-marketplace(f66bb5cf-8e84-4f07-bfc0-358fee62eda4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.527927 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.529556 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-m7zms" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.529791 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktpcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xkm4z_openshift-marketplace(0240e5d9-27fa-42fe-8cab-63a80897677e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:04:06 crc kubenswrapper[4877]: E1211 18:04:06.534538 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xkm4z" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" Dec 11 18:04:06 crc 
kubenswrapper[4877]: I1211 18:04:06.936720 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sn9xv"] Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.033694 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-m7zms" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.033712 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xkm4z" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.105770 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.106078 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8spn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rhhb6_openshift-marketplace(a839141a-cee3-4d0c-bfda-c1fb36ee04fd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.107293 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rhhb6" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" Dec 11 18:04:08 crc 
kubenswrapper[4877]: E1211 18:04:08.108406 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.108598 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk6dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-7fwxl_openshift-marketplace(6c00914d-bacb-4706-9ca8-8897c4c0a544): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.109712 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7fwxl" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" Dec 11 18:04:08 crc kubenswrapper[4877]: I1211 18:04:08.125834 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:04:08 crc kubenswrapper[4877]: I1211 18:04:08.126095 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:04:08 crc kubenswrapper[4877]: I1211 18:04:08.170876 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" event={"ID":"fa0b7b99-8d0a-48ad-9f98-da5947644472","Type":"ContainerStarted","Data":"efa7d2a4eaed19fef92d11984bb1ce8aecc8b68ab07eaa2ca0f0afdd958a57f7"} Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.186391 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-7fwxl" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" Dec 11 18:04:08 crc kubenswrapper[4877]: E1211 18:04:08.187146 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rhhb6" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" Dec 11 18:04:08 crc kubenswrapper[4877]: I1211 18:04:08.413743 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 18:04:08 crc kubenswrapper[4877]: W1211 18:04:08.423962 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod94a9adaa_ec86_4c9e_8dfa_09d8559ae8ed.slice/crio-cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180 WatchSource:0}: Error finding container cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180: Status 404 returned error can't find the container with id cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180 Dec 11 18:04:08 crc kubenswrapper[4877]: I1211 18:04:08.497804 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.177964 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e5508650-8a33-447f-bc52-87e7532200d7","Type":"ContainerStarted","Data":"3657b83f9ce33a0fc890fe2ffa407d8ec647c4a396238055d5bad25b448de8da"} Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.180111 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed","Type":"ContainerStarted","Data":"cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180"} Dec 11 18:04:09 crc 
kubenswrapper[4877]: I1211 18:04:09.182332 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" event={"ID":"fa0b7b99-8d0a-48ad-9f98-da5947644472","Type":"ContainerStarted","Data":"169c03049fc407f15418f0e7246e838015b7ef10911d3667a1717d17206ef606"} Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.184837 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cvvxz" event={"ID":"1cae51ed-b80c-4017-9b9f-1485a809f145","Type":"ContainerStarted","Data":"f78b82c286f0b0538afb2b23d51fb33669ea408a8e14a4a163fd16efa512d294"} Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.185589 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cvvxz" Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.185973 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:04:09 crc kubenswrapper[4877]: I1211 18:04:09.186022 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 11 18:04:10 crc kubenswrapper[4877]: I1211 18:04:10.190212 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 11 18:04:10 crc kubenswrapper[4877]: I1211 18:04:10.190773 4877 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.229037 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed","Type":"ContainerStarted","Data":"973ecb99f2d86bc7797851270cea6735f026dca6d0ca77faa54db916d2f65063"}
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.229166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sn9xv" event={"ID":"fa0b7b99-8d0a-48ad-9f98-da5947644472","Type":"ContainerStarted","Data":"c2af87f0cd64306f3ea2a9ffa0d024fa3cc7acb8c9eed4dbe568fd0a01f50225"}
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.229183 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e5508650-8a33-447f-bc52-87e7532200d7","Type":"ContainerStarted","Data":"92d9e8e55e212dc8c2c5b8daaa24845a51e48ccd585f36692a11695dff4b1deb"}
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.242341 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=24.242280855 podStartE2EDuration="24.242280855s" podCreationTimestamp="2025-12-11 18:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:04:11.238501437 +0000 UTC m=+212.264745481" watchObservedRunningTime="2025-12-11 18:04:11.242280855 +0000 UTC m=+212.268524899"
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.279529 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sn9xv" podStartSLOduration=191.279502263 podStartE2EDuration="3m11.279502263s" podCreationTimestamp="2025-12-11 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:04:11.26289352 +0000 UTC m=+212.289137584" watchObservedRunningTime="2025-12-11 18:04:11.279502263 +0000 UTC m=+212.305746307"
Dec 11 18:04:11 crc kubenswrapper[4877]: I1211 18:04:11.282715 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=18.282703067 podStartE2EDuration="18.282703067s" podCreationTimestamp="2025-12-11 18:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:04:11.278158212 +0000 UTC m=+212.304402256" watchObservedRunningTime="2025-12-11 18:04:11.282703067 +0000 UTC m=+212.308947111"
Dec 11 18:04:12 crc kubenswrapper[4877]: I1211 18:04:12.243539 4877 generic.go:334] "Generic (PLEG): container finished" podID="94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" containerID="973ecb99f2d86bc7797851270cea6735f026dca6d0ca77faa54db916d2f65063" exitCode=0
Dec 11 18:04:12 crc kubenswrapper[4877]: I1211 18:04:12.243657 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed","Type":"ContainerDied","Data":"973ecb99f2d86bc7797851270cea6735f026dca6d0ca77faa54db916d2f65063"}
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.802444 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.942988 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir\") pod \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") "
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.943127 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access\") pod \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\" (UID: \"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed\") "
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.943159 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" (UID: "94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.943507 4877 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 11 18:04:15 crc kubenswrapper[4877]: I1211 18:04:15.956292 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" (UID: "94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.046189 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.286825 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed","Type":"ContainerDied","Data":"cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180"}
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.286872 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda94c79b4238bc8065487adba5805abe745283d98e3f120618b44fb5ff63180"
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.286941 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.637472 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.637559 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.637632 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr"
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.638419 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 18:04:16 crc kubenswrapper[4877]: I1211 18:04:16.638530 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a" gracePeriod=600
Dec 11 18:04:17 crc kubenswrapper[4877]: I1211 18:04:17.294473 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a" exitCode=0
Dec 11 18:04:17 crc kubenswrapper[4877]: I1211 18:04:17.294557 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a"}
Dec 11 18:04:17 crc kubenswrapper[4877]: I1211 18:04:17.297142 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerStarted","Data":"e6a1e5d3f441d0dc8cd77222482cb3dd45ece2a3752c1ddcf697ff4681311b77"}
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.125463 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.126000 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.125548 4877 patch_prober.go:28] interesting pod/downloads-7954f5f757-cvvxz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.126084 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cvvxz" podUID="1cae51ed-b80c-4017-9b9f-1485a809f145" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.309290 4877 generic.go:334] "Generic (PLEG): container finished" podID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerID="e6a1e5d3f441d0dc8cd77222482cb3dd45ece2a3752c1ddcf697ff4681311b77" exitCode=0
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.309389 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerDied","Data":"e6a1e5d3f441d0dc8cd77222482cb3dd45ece2a3752c1ddcf697ff4681311b77"}
Dec 11 18:04:18 crc kubenswrapper[4877]: I1211 18:04:18.313078 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c"}
Dec 11 18:04:23 crc kubenswrapper[4877]: I1211 18:04:23.345090 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerStarted","Data":"1636e81aaab51808757aac0a37ee4e5277b4d44aa1a0d180bcdff08643405bca"}
Dec 11 18:04:23 crc kubenswrapper[4877]: I1211 18:04:23.347235 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerStarted","Data":"d85c34bc0ade141bc8751d0ec4ca713717da59739d65350d5f5549f1cfaf661b"}
Dec 11 18:04:23 crc kubenswrapper[4877]: I1211 18:04:23.366650 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerStarted","Data":"2af8b20e144e27fee63938afe07e27cafea4452d82e45f1c8dfb0bdce510cb7b"}
Dec 11 18:04:24 crc kubenswrapper[4877]: I1211 18:04:24.377117 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerStarted","Data":"f08accf5fc2f7287b21a895b10ec2571a496a9d2c35fb9e3715dfb46072974c6"}
Dec 11 18:04:24 crc kubenswrapper[4877]: I1211 18:04:24.419085 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerStarted","Data":"79d41cb9ef2d580a197d1cdad924b50a228589d1a6a471d9bfff888ddcd7b13c"}
Dec 11 18:04:24 crc kubenswrapper[4877]: I1211 18:04:24.423982 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerStarted","Data":"ee32b0264544ea0234ed569037e86aec478dc7493c7c8f5e34e3110716eba3e1"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.432988 4877 generic.go:334] "Generic (PLEG): container finished" podID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerID="2af8b20e144e27fee63938afe07e27cafea4452d82e45f1c8dfb0bdce510cb7b" exitCode=0
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.433062 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerDied","Data":"2af8b20e144e27fee63938afe07e27cafea4452d82e45f1c8dfb0bdce510cb7b"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.436114 4877 generic.go:334] "Generic (PLEG): container finished" podID="5400698a-74be-440a-9e76-ae18bc00d85b" containerID="d85c34bc0ade141bc8751d0ec4ca713717da59739d65350d5f5549f1cfaf661b" exitCode=0
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.436166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerDied","Data":"d85c34bc0ade141bc8751d0ec4ca713717da59739d65350d5f5549f1cfaf661b"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.439200 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerStarted","Data":"a96e9e9a09e94ee397954ec440b7ae3dd161b74adcfe29e1c5b74d3d56f8448c"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.441640 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerStarted","Data":"096e971d31bef72574f7a3729545644297ec762e6adba97601fde9fc73363c77"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.443341 4877 generic.go:334] "Generic (PLEG): container finished" podID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerID="1636e81aaab51808757aac0a37ee4e5277b4d44aa1a0d180bcdff08643405bca" exitCode=0
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.443411 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerDied","Data":"1636e81aaab51808757aac0a37ee4e5277b4d44aa1a0d180bcdff08643405bca"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.446613 4877 generic.go:334] "Generic (PLEG): container finished" podID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerID="ee32b0264544ea0234ed569037e86aec478dc7493c7c8f5e34e3110716eba3e1" exitCode=0
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.446691 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerDied","Data":"ee32b0264544ea0234ed569037e86aec478dc7493c7c8f5e34e3110716eba3e1"}
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.450127 4877 generic.go:334] "Generic (PLEG): container finished" podID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerID="f08accf5fc2f7287b21a895b10ec2571a496a9d2c35fb9e3715dfb46072974c6" exitCode=0
Dec 11 18:04:25 crc kubenswrapper[4877]: I1211 18:04:25.450794 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerDied","Data":"f08accf5fc2f7287b21a895b10ec2571a496a9d2c35fb9e3715dfb46072974c6"}
Dec 11 18:04:27 crc kubenswrapper[4877]: I1211 18:04:27.498308 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxf6j" podStartSLOduration=5.665392969 podStartE2EDuration="1m20.498282968s" podCreationTimestamp="2025-12-11 18:03:07 +0000 UTC" firstStartedPulling="2025-12-11 18:03:09.271025734 +0000 UTC m=+150.297269778" lastFinishedPulling="2025-12-11 18:04:24.103915733 +0000 UTC m=+225.130159777" observedRunningTime="2025-12-11 18:04:27.493638091 +0000 UTC m=+228.519882135" watchObservedRunningTime="2025-12-11 18:04:27.498282968 +0000 UTC m=+228.524527012"
Dec 11 18:04:27 crc kubenswrapper[4877]: I1211 18:04:27.561891 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxf6j"
Dec 11 18:04:27 crc kubenswrapper[4877]: I1211 18:04:27.562157 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxf6j"
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.144396 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cvvxz"
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.473617 4877 generic.go:334] "Generic (PLEG): container finished" podID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerID="79d41cb9ef2d580a197d1cdad924b50a228589d1a6a471d9bfff888ddcd7b13c" exitCode=0
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.473714 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerDied","Data":"79d41cb9ef2d580a197d1cdad924b50a228589d1a6a471d9bfff888ddcd7b13c"}
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.475768 4877 generic.go:334] "Generic (PLEG): container finished" podID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerID="a96e9e9a09e94ee397954ec440b7ae3dd161b74adcfe29e1c5b74d3d56f8448c" exitCode=0
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.476394 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerDied","Data":"a96e9e9a09e94ee397954ec440b7ae3dd161b74adcfe29e1c5b74d3d56f8448c"}
Dec 11 18:04:28 crc kubenswrapper[4877]: I1211 18:04:28.688863 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cxf6j" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="registry-server" probeResult="failure" output=<
Dec 11 18:04:28 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s
Dec 11 18:04:28 crc kubenswrapper[4877]: >
Dec 11 18:04:30 crc kubenswrapper[4877]: I1211 18:04:30.502480 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerStarted","Data":"d1c8e65c2738f5bdf9928d612909874c008d44f0eb9e2e952f0a896697da9b8f"}
Dec 11 18:04:30 crc kubenswrapper[4877]: I1211 18:04:30.525216 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2rstw" podStartSLOduration=4.4474685990000005 podStartE2EDuration="1m23.525193231s" podCreationTimestamp="2025-12-11 18:03:07 +0000 UTC" firstStartedPulling="2025-12-11 18:03:10.407171979 +0000 UTC m=+151.433416023" lastFinishedPulling="2025-12-11 18:04:29.484896601 +0000 UTC m=+230.511140655" observedRunningTime="2025-12-11 18:04:30.522262193 +0000 UTC m=+231.548506257" watchObservedRunningTime="2025-12-11 18:04:30.525193231 +0000 UTC m=+231.551437275"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.246334 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxf6j"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.269876 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2rstw"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.269929 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2rstw"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.314989 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2rstw"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.315072 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxf6j"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.616050 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2rstw"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.908803 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.916144 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.929645 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rstw"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.936891 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pvh2t"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.951658 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.951940 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" containerID="cri-o://ce93d487a3b4c0fc8502eff471dc55d24616121f13c176a303fe1926aca1010e" gracePeriod=30
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.959980 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.973803 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.981176 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qvdz"]
Dec 11 18:04:38 crc kubenswrapper[4877]: E1211 18:04:38.981464 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" containerName="pruner"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.981480 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" containerName="pruner"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.981577 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a9adaa-ec86-4c9e-8dfa-09d8559ae8ed" containerName="pruner"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.982023 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.986143 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"]
Dec 11 18:04:38 crc kubenswrapper[4877]: I1211 18:04:38.997188 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"]
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.004961 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qvdz"]
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.028214 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.028276 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8g7\" (UniqueName: \"kubernetes.io/projected/1eaf037c-b9a9-4c1b-b108-0ffcad610322-kube-api-access-mt8g7\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.028317 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.129444 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.130397 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8g7\" (UniqueName: \"kubernetes.io/projected/1eaf037c-b9a9-4c1b-b108-0ffcad610322-kube-api-access-mt8g7\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.130565 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.133001 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.141431 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1eaf037c-b9a9-4c1b-b108-0ffcad610322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.150792 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8g7\" (UniqueName: \"kubernetes.io/projected/1eaf037c-b9a9-4c1b-b108-0ffcad610322-kube-api-access-mt8g7\") pod \"marketplace-operator-79b997595-4qvdz\" (UID: \"1eaf037c-b9a9-4c1b-b108-0ffcad610322\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.316689 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.511263 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqnqb"]
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.616634 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerStarted","Data":"067537b7d361b736a84f9f67d76cad959770f1618f80fe6f28fe188e34d77083"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.617059 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pvh2t" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="registry-server" containerID="cri-o://067537b7d361b736a84f9f67d76cad959770f1618f80fe6f28fe188e34d77083" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.655798 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerStarted","Data":"e6208ac294267f3ae676de1f9da46e8304d4e464ebf2136b2266661c1b61c640"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.656017 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7fwxl" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="registry-server" containerID="cri-o://e6208ac294267f3ae676de1f9da46e8304d4e464ebf2136b2266661c1b61c640" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.678863 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pvh2t" podStartSLOduration=3.5583729 podStartE2EDuration="1m32.678824669s" podCreationTimestamp="2025-12-11 18:03:07 +0000 UTC" firstStartedPulling="2025-12-11 18:03:09.281860883 +0000 UTC m=+150.308104937" lastFinishedPulling="2025-12-11 18:04:38.402312652 +0000 UTC m=+239.428556706" observedRunningTime="2025-12-11 18:04:39.670105468 +0000 UTC m=+240.696349522" watchObservedRunningTime="2025-12-11 18:04:39.678824669 +0000 UTC m=+240.705068703"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.698063 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerStarted","Data":"52c0537c3fbe34668745998ab0dafe6a162eb1b3cb9518595fa4ea05c8a55138"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.698292 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xkm4z" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="registry-server" containerID="cri-o://52c0537c3fbe34668745998ab0dafe6a162eb1b3cb9518595fa4ea05c8a55138" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.710692 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerStarted","Data":"0a9321785bf0a62f942667c38e971e32a6d07031c3c41bb4079cb65ba386d91b"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.710923 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rhhb6" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="registry-server" containerID="cri-o://0a9321785bf0a62f942667c38e971e32a6d07031c3c41bb4079cb65ba386d91b" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.732390 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7fwxl" podStartSLOduration=3.929767988 podStartE2EDuration="1m30.727226185s" podCreationTimestamp="2025-12-11 18:03:09 +0000 UTC" firstStartedPulling="2025-12-11 18:03:11.532802444 +0000 UTC m=+152.559046488" lastFinishedPulling="2025-12-11 18:04:38.330260641 +0000 UTC m=+239.356504685" observedRunningTime="2025-12-11 18:04:39.707903799 +0000 UTC m=+240.734147843" watchObservedRunningTime="2025-12-11 18:04:39.727226185 +0000 UTC m=+240.753470219"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.742581 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerStarted","Data":"ffe56fab9407d956234b1b0fd230b7a82095b4aec7dd203214b9ecf1d3c0cf4d"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.742951 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tq2vm" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="registry-server" containerID="cri-o://ffe56fab9407d956234b1b0fd230b7a82095b4aec7dd203214b9ecf1d3c0cf4d" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.751962 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xkm4z" podStartSLOduration=3.908821808 podStartE2EDuration="1m29.751943355s" podCreationTimestamp="2025-12-11 18:03:10 +0000 UTC" firstStartedPulling="2025-12-11 18:03:12.540884393 +0000 UTC m=+153.567128447" lastFinishedPulling="2025-12-11 18:04:38.38400595 +0000 UTC m=+239.410249994" observedRunningTime="2025-12-11 18:04:39.750469251 +0000 UTC m=+240.776713305" watchObservedRunningTime="2025-12-11 18:04:39.751943355 +0000 UTC m=+240.778187399"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.755535 4877 generic.go:334] "Generic (PLEG): container finished" podID="fa0265e5-9837-4f97-891a-703b0e440df3" containerID="ce93d487a3b4c0fc8502eff471dc55d24616121f13c176a303fe1926aca1010e" exitCode=0
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.755598 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" event={"ID":"fa0265e5-9837-4f97-891a-703b0e440df3","Type":"ContainerDied","Data":"ce93d487a3b4c0fc8502eff471dc55d24616121f13c176a303fe1926aca1010e"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.778939 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m7zms" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="registry-server" containerID="cri-o://1b77af49b5c0925f9f5942f3df19486431a3999ef19e3edbbc70f7ae612001e5" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.779469 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerStarted","Data":"1b77af49b5c0925f9f5942f3df19486431a3999ef19e3edbbc70f7ae612001e5"}
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.779613 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxf6j" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="registry-server" containerID="cri-o://096e971d31bef72574f7a3729545644297ec762e6adba97601fde9fc73363c77" gracePeriod=30
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.945737 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq2vm" podStartSLOduration=4.991997804 podStartE2EDuration="1m32.945715704s" podCreationTimestamp="2025-12-11 18:03:07 +0000 UTC" firstStartedPulling="2025-12-11 18:03:10.41467303 +0000 UTC m=+151.440917074" lastFinishedPulling="2025-12-11 18:04:38.36839093 +0000 UTC m=+239.394634974" observedRunningTime="2025-12-11 18:04:39.845058873 +0000 UTC m=+240.871302927" watchObservedRunningTime="2025-12-11 18:04:39.945715704 +0000 UTC m=+240.971959738"
Dec 11 18:04:39 crc kubenswrapper[4877]: I1211 18:04:39.963675 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qvdz"]
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.009417 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rhhb6" podStartSLOduration=4.112513419 podStartE2EDuration="1m31.009358091s" podCreationTimestamp="2025-12-11 18:03:09 +0000 UTC" firstStartedPulling="2025-12-11 18:03:11.487777313 +0000 UTC m=+152.514021357" lastFinishedPulling="2025-12-11 18:04:38.384621985 +0000 UTC m=+239.410866029" observedRunningTime="2025-12-11 18:04:39.963930614 +0000 UTC m=+240.990174658" watchObservedRunningTime="2025-12-11 18:04:40.009358091 +0000 UTC m=+241.035602135"
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.010444 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7fwxl"
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.057634 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rhhb6"
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.289184 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rstw"]
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.714564 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m7zms" podStartSLOduration=5.871233808 podStartE2EDuration="1m30.714526513s" podCreationTimestamp="2025-12-11 18:03:10 +0000 UTC" firstStartedPulling="2025-12-11 18:03:13.57475184 +0000 UTC m=+154.600995884" lastFinishedPulling="2025-12-11 18:04:38.418044545 +0000 UTC m=+239.444288589" observedRunningTime="2025-12-11 18:04:40.712499416 +0000 UTC m=+241.738743480" watchObservedRunningTime="2025-12-11 18:04:40.714526513 +0000 UTC m=+241.740770557"
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.771906 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xkm4z"
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.784250 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2rstw" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="registry-server" containerID="cri-o://d1c8e65c2738f5bdf9928d612909874c008d44f0eb9e2e952f0a896697da9b8f" gracePeriod=30
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.784670 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" event={"ID":"1eaf037c-b9a9-4c1b-b108-0ffcad610322","Type":"ContainerStarted","Data":"7dd9a28885a8e9bab880b43e7e20f18aed156ed5aefc768064350384e7577939"}
Dec 11 18:04:40 crc kubenswrapper[4877]: I1211 18:04:40.973956 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m7zms"
Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.096395 4877 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.185501 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gffzz\" (UniqueName: \"kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz\") pod \"fa0265e5-9837-4f97-891a-703b0e440df3\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.185579 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics\") pod \"fa0265e5-9837-4f97-891a-703b0e440df3\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.185609 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca\") pod \"fa0265e5-9837-4f97-891a-703b0e440df3\" (UID: \"fa0265e5-9837-4f97-891a-703b0e440df3\") " Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.186482 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fa0265e5-9837-4f97-891a-703b0e440df3" (UID: "fa0265e5-9837-4f97-891a-703b0e440df3"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.192941 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz" (OuterVolumeSpecName: "kube-api-access-gffzz") pod "fa0265e5-9837-4f97-891a-703b0e440df3" (UID: "fa0265e5-9837-4f97-891a-703b0e440df3"). InnerVolumeSpecName "kube-api-access-gffzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.193258 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fa0265e5-9837-4f97-891a-703b0e440df3" (UID: "fa0265e5-9837-4f97-891a-703b0e440df3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.286937 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gffzz\" (UniqueName: \"kubernetes.io/projected/fa0265e5-9837-4f97-891a-703b0e440df3-kube-api-access-gffzz\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.286991 4877 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.287005 4877 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa0265e5-9837-4f97-891a-703b0e440df3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.792287 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" event={"ID":"fa0265e5-9837-4f97-891a-703b0e440df3","Type":"ContainerDied","Data":"10fd7678332ea6a77b69c2518e1376525f5dbbd2a8eaa07257dee2d7c90ac23c"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.792337 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k2xpd" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.792417 4877 scope.go:117] "RemoveContainer" containerID="ce93d487a3b4c0fc8502eff471dc55d24616121f13c176a303fe1926aca1010e" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.796090 4877 generic.go:334] "Generic (PLEG): container finished" podID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerID="096e971d31bef72574f7a3729545644297ec762e6adba97601fde9fc73363c77" exitCode=0 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.796160 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerDied","Data":"096e971d31bef72574f7a3729545644297ec762e6adba97601fde9fc73363c77"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.797904 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvh2t_2f3a3c00-9466-4501-b1ee-12d677fc3b4e/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.798600 4877 generic.go:334] "Generic (PLEG): container finished" podID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerID="067537b7d361b736a84f9f67d76cad959770f1618f80fe6f28fe188e34d77083" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.798676 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerDied","Data":"067537b7d361b736a84f9f67d76cad959770f1618f80fe6f28fe188e34d77083"} Dec 11 
18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.800490 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" event={"ID":"1eaf037c-b9a9-4c1b-b108-0ffcad610322","Type":"ContainerStarted","Data":"496a5ae05d3707ec86019c7b43cbdb883eaebf289e966f79ee533667c21a8720"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.801795 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.803692 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rhhb6_a839141a-cee3-4d0c-bfda-c1fb36ee04fd/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.805588 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerDied","Data":"0a9321785bf0a62f942667c38e971e32a6d07031c3c41bb4079cb65ba386d91b"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.805716 4877 generic.go:334] "Generic (PLEG): container finished" podID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerID="0a9321785bf0a62f942667c38e971e32a6d07031c3c41bb4079cb65ba386d91b" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.810256 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.815275 4877 generic.go:334] "Generic (PLEG): container finished" podID="5400698a-74be-440a-9e76-ae18bc00d85b" containerID="d1c8e65c2738f5bdf9928d612909874c008d44f0eb9e2e952f0a896697da9b8f" exitCode=0 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.815385 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" 
event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerDied","Data":"d1c8e65c2738f5bdf9928d612909874c008d44f0eb9e2e952f0a896697da9b8f"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.816637 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"] Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.818801 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m7zms_f66bb5cf-8e84-4f07-bfc0-358fee62eda4/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.819540 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k2xpd"] Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.819879 4877 generic.go:334] "Generic (PLEG): container finished" podID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerID="1b77af49b5c0925f9f5942f3df19486431a3999ef19e3edbbc70f7ae612001e5" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.819963 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerDied","Data":"1b77af49b5c0925f9f5942f3df19486431a3999ef19e3edbbc70f7ae612001e5"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.821187 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7fwxl_6c00914d-bacb-4706-9ca8-8897c4c0a544/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.821886 4877 generic.go:334] "Generic (PLEG): container finished" podID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerID="e6208ac294267f3ae676de1f9da46e8304d4e464ebf2136b2266661c1b61c640" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.821910 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" 
event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerDied","Data":"e6208ac294267f3ae676de1f9da46e8304d4e464ebf2136b2266661c1b61c640"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.823208 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkm4z_0240e5d9-27fa-42fe-8cab-63a80897677e/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.824179 4877 generic.go:334] "Generic (PLEG): container finished" podID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerID="52c0537c3fbe34668745998ab0dafe6a162eb1b3cb9518595fa4ea05c8a55138" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.824222 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerDied","Data":"52c0537c3fbe34668745998ab0dafe6a162eb1b3cb9518595fa4ea05c8a55138"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.825638 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq2vm_d22d1fa7-6fd8-43c8-94e0-6aa0759394dc/registry-server/0.log" Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.826345 4877 generic.go:334] "Generic (PLEG): container finished" podID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerID="ffe56fab9407d956234b1b0fd230b7a82095b4aec7dd203214b9ecf1d3c0cf4d" exitCode=1 Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.826413 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerDied","Data":"ffe56fab9407d956234b1b0fd230b7a82095b4aec7dd203214b9ecf1d3c0cf4d"} Dec 11 18:04:41 crc kubenswrapper[4877]: I1211 18:04:41.839167 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" podStartSLOduration=3.839146007 
podStartE2EDuration="3.839146007s" podCreationTimestamp="2025-12-11 18:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:04:41.835590775 +0000 UTC m=+242.861834829" watchObservedRunningTime="2025-12-11 18:04:41.839146007 +0000 UTC m=+242.865390051" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.381776 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rhhb6_a839141a-cee3-4d0c-bfda-c1fb36ee04fd/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.385154 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.503919 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities\") pod \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.504017 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content\") pod \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.504055 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8spn5\" (UniqueName: \"kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5\") pod \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\" (UID: \"a839141a-cee3-4d0c-bfda-c1fb36ee04fd\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.505185 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities" (OuterVolumeSpecName: "utilities") pod "a839141a-cee3-4d0c-bfda-c1fb36ee04fd" (UID: "a839141a-cee3-4d0c-bfda-c1fb36ee04fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.515700 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5" (OuterVolumeSpecName: "kube-api-access-8spn5") pod "a839141a-cee3-4d0c-bfda-c1fb36ee04fd" (UID: "a839141a-cee3-4d0c-bfda-c1fb36ee04fd"). InnerVolumeSpecName "kube-api-access-8spn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.552460 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a839141a-cee3-4d0c-bfda-c1fb36ee04fd" (UID: "a839141a-cee3-4d0c-bfda-c1fb36ee04fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.580691 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkm4z_0240e5d9-27fa-42fe-8cab-63a80897677e/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.581578 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.587618 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m7zms_f66bb5cf-8e84-4f07-bfc0-358fee62eda4/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.588736 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.594386 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvh2t_2f3a3c00-9466-4501-b1ee-12d677fc3b4e/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.595099 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.606119 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.606151 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.606165 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8spn5\" (UniqueName: \"kubernetes.io/projected/a839141a-cee3-4d0c-bfda-c1fb36ee04fd-kube-api-access-8spn5\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.607732 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.621278 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.645619 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7fwxl_6c00914d-bacb-4706-9ca8-8897c4c0a544/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.646598 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.658914 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq2vm_d22d1fa7-6fd8-43c8-94e0-6aa0759394dc/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.659767 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707501 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7qpm\" (UniqueName: \"kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm\") pod \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707574 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities\") pod \"0240e5d9-27fa-42fe-8cab-63a80897677e\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707623 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities\") pod \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") 
" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707655 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content\") pod \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707684 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content\") pod \"5400698a-74be-440a-9e76-ae18bc00d85b\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707717 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content\") pod \"0240e5d9-27fa-42fe-8cab-63a80897677e\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707763 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktpcs\" (UniqueName: \"kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs\") pod \"0240e5d9-27fa-42fe-8cab-63a80897677e\" (UID: \"0240e5d9-27fa-42fe-8cab-63a80897677e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707824 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities\") pod \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\" (UID: \"14091be0-96bc-40cf-ae0e-ba1a5de910ee\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707856 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk6dd\" (UniqueName: 
\"kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd\") pod \"6c00914d-bacb-4706-9ca8-8897c4c0a544\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707884 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities\") pod \"6c00914d-bacb-4706-9ca8-8897c4c0a544\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707907 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content\") pod \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707930 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6khs\" (UniqueName: \"kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs\") pod \"5400698a-74be-440a-9e76-ae18bc00d85b\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.707973 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content\") pod \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708004 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zngg\" (UniqueName: \"kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg\") pod \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\" (UID: \"f66bb5cf-8e84-4f07-bfc0-358fee62eda4\") " Dec 11 18:04:42 crc 
kubenswrapper[4877]: I1211 18:04:42.708044 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities\") pod \"5400698a-74be-440a-9e76-ae18bc00d85b\" (UID: \"5400698a-74be-440a-9e76-ae18bc00d85b\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708107 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content\") pod \"6c00914d-bacb-4706-9ca8-8897c4c0a544\" (UID: \"6c00914d-bacb-4706-9ca8-8897c4c0a544\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708135 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities\") pod \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708161 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl852\" (UniqueName: \"kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852\") pod \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\" (UID: \"2f3a3c00-9466-4501-b1ee-12d677fc3b4e\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708603 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities" (OuterVolumeSpecName: "utilities") pod "0240e5d9-27fa-42fe-8cab-63a80897677e" (UID: "0240e5d9-27fa-42fe-8cab-63a80897677e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.708633 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities" (OuterVolumeSpecName: "utilities") pod "f66bb5cf-8e84-4f07-bfc0-358fee62eda4" (UID: "f66bb5cf-8e84-4f07-bfc0-358fee62eda4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.709229 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities" (OuterVolumeSpecName: "utilities") pod "6c00914d-bacb-4706-9ca8-8897c4c0a544" (UID: "6c00914d-bacb-4706-9ca8-8897c4c0a544"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.711612 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities" (OuterVolumeSpecName: "utilities") pod "14091be0-96bc-40cf-ae0e-ba1a5de910ee" (UID: "14091be0-96bc-40cf-ae0e-ba1a5de910ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.713268 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd" (OuterVolumeSpecName: "kube-api-access-pk6dd") pod "6c00914d-bacb-4706-9ca8-8897c4c0a544" (UID: "6c00914d-bacb-4706-9ca8-8897c4c0a544"). InnerVolumeSpecName "kube-api-access-pk6dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.713388 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities" (OuterVolumeSpecName: "utilities") pod "2f3a3c00-9466-4501-b1ee-12d677fc3b4e" (UID: "2f3a3c00-9466-4501-b1ee-12d677fc3b4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.714579 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities" (OuterVolumeSpecName: "utilities") pod "5400698a-74be-440a-9e76-ae18bc00d85b" (UID: "5400698a-74be-440a-9e76-ae18bc00d85b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.714656 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg" (OuterVolumeSpecName: "kube-api-access-4zngg") pod "f66bb5cf-8e84-4f07-bfc0-358fee62eda4" (UID: "f66bb5cf-8e84-4f07-bfc0-358fee62eda4"). InnerVolumeSpecName "kube-api-access-4zngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.715528 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852" (OuterVolumeSpecName: "kube-api-access-jl852") pod "2f3a3c00-9466-4501-b1ee-12d677fc3b4e" (UID: "2f3a3c00-9466-4501-b1ee-12d677fc3b4e"). InnerVolumeSpecName "kube-api-access-jl852". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.716838 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs" (OuterVolumeSpecName: "kube-api-access-ktpcs") pod "0240e5d9-27fa-42fe-8cab-63a80897677e" (UID: "0240e5d9-27fa-42fe-8cab-63a80897677e"). InnerVolumeSpecName "kube-api-access-ktpcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.718550 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs" (OuterVolumeSpecName: "kube-api-access-d6khs") pod "5400698a-74be-440a-9e76-ae18bc00d85b" (UID: "5400698a-74be-440a-9e76-ae18bc00d85b"). InnerVolumeSpecName "kube-api-access-d6khs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.729466 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm" (OuterVolumeSpecName: "kube-api-access-w7qpm") pod "14091be0-96bc-40cf-ae0e-ba1a5de910ee" (UID: "14091be0-96bc-40cf-ae0e-ba1a5de910ee"). InnerVolumeSpecName "kube-api-access-w7qpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.788229 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c00914d-bacb-4706-9ca8-8897c4c0a544" (UID: "6c00914d-bacb-4706-9ca8-8897c4c0a544"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.798030 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14091be0-96bc-40cf-ae0e-ba1a5de910ee" (UID: "14091be0-96bc-40cf-ae0e-ba1a5de910ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.809501 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content\") pod \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.809987 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qplr\" (UniqueName: \"kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr\") pod \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.810044 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities\") pod \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\" (UID: \"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc\") " Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.810340 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811128 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811151 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811191 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktpcs\" (UniqueName: \"kubernetes.io/projected/0240e5d9-27fa-42fe-8cab-63a80897677e-kube-api-access-ktpcs\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811209 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14091be0-96bc-40cf-ae0e-ba1a5de910ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811221 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk6dd\" (UniqueName: \"kubernetes.io/projected/6c00914d-bacb-4706-9ca8-8897c4c0a544-kube-api-access-pk6dd\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811231 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811243 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6khs\" (UniqueName: \"kubernetes.io/projected/5400698a-74be-440a-9e76-ae18bc00d85b-kube-api-access-d6khs\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811254 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zngg\" (UniqueName: \"kubernetes.io/projected/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-kube-api-access-4zngg\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811263 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811271 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c00914d-bacb-4706-9ca8-8897c4c0a544-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811280 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811289 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl852\" (UniqueName: \"kubernetes.io/projected/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-kube-api-access-jl852\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.811297 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7qpm\" (UniqueName: \"kubernetes.io/projected/14091be0-96bc-40cf-ae0e-ba1a5de910ee-kube-api-access-w7qpm\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.812460 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities" (OuterVolumeSpecName: "utilities") pod "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" (UID: "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.816735 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5400698a-74be-440a-9e76-ae18bc00d85b" (UID: "5400698a-74be-440a-9e76-ae18bc00d85b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.820576 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr" (OuterVolumeSpecName: "kube-api-access-7qplr") pod "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" (UID: "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc"). InnerVolumeSpecName "kube-api-access-7qplr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.834572 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxf6j" event={"ID":"14091be0-96bc-40cf-ae0e-ba1a5de910ee","Type":"ContainerDied","Data":"a27b3b91f77aa9a891ea83fae5db7b18047da23de15e95ab729d513f009f7242"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.834647 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxf6j" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.834665 4877 scope.go:117] "RemoveContainer" containerID="096e971d31bef72574f7a3729545644297ec762e6adba97601fde9fc73363c77" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.836346 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xkm4z_0240e5d9-27fa-42fe-8cab-63a80897677e/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.838227 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xkm4z" event={"ID":"0240e5d9-27fa-42fe-8cab-63a80897677e","Type":"ContainerDied","Data":"d486941a07b0992e46b6996848cc9a3034a1112f191b4f15221cbe009b40c2fc"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.838611 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xkm4z" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.840290 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f3a3c00-9466-4501-b1ee-12d677fc3b4e" (UID: "2f3a3c00-9466-4501-b1ee-12d677fc3b4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.849406 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m7zms_f66bb5cf-8e84-4f07-bfc0-358fee62eda4/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.852567 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m7zms" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.852493 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m7zms" event={"ID":"f66bb5cf-8e84-4f07-bfc0-358fee62eda4","Type":"ContainerDied","Data":"2cc00e21e5e36f62475fcc2081a9ec709aa41a625a676f0bad087b6aebd68cb3"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.854558 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvh2t_2f3a3c00-9466-4501-b1ee-12d677fc3b4e/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.855164 4877 scope.go:117] "RemoveContainer" containerID="e6a1e5d3f441d0dc8cd77222482cb3dd45ece2a3752c1ddcf697ff4681311b77" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.855473 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvh2t" event={"ID":"2f3a3c00-9466-4501-b1ee-12d677fc3b4e","Type":"ContainerDied","Data":"3771f30b8ac7e181239d9669a1c3b69438c3a2e24301d75b8504c324e6964b41"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.855641 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvh2t" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.862274 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7fwxl_6c00914d-bacb-4706-9ca8-8897c4c0a544/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.864711 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fwxl" event={"ID":"6c00914d-bacb-4706-9ca8-8897c4c0a544","Type":"ContainerDied","Data":"e733cf8b2e093e65e03f07ace9f1c31753b9fd6709e7d870929cd61127c601ef"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.864971 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fwxl" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.873616 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.876733 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rhhb6_a839141a-cee3-4d0c-bfda-c1fb36ee04fd/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.877280 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxf6j"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.877755 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhhb6" event={"ID":"a839141a-cee3-4d0c-bfda-c1fb36ee04fd","Type":"ContainerDied","Data":"57a8b912feeb3c55b25973d7b2e76ee40d1adc0ee5655dc8ae001eb41998f72b"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.878057 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhhb6" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.885791 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0240e5d9-27fa-42fe-8cab-63a80897677e" (UID: "0240e5d9-27fa-42fe-8cab-63a80897677e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.887884 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66bb5cf-8e84-4f07-bfc0-358fee62eda4" (UID: "f66bb5cf-8e84-4f07-bfc0-358fee62eda4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.887987 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" (UID: "d22d1fa7-6fd8-43c8-94e0-6aa0759394dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.888591 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq2vm_d22d1fa7-6fd8-43c8-94e0-6aa0759394dc/registry-server/0.log" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.889537 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq2vm" event={"ID":"d22d1fa7-6fd8-43c8-94e0-6aa0759394dc","Type":"ContainerDied","Data":"7be2b2e48b9662450fb54300afab0abe66b9927640459eaa75105832d93afc16"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.889711 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq2vm" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.892905 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rstw" event={"ID":"5400698a-74be-440a-9e76-ae18bc00d85b","Type":"ContainerDied","Data":"10fd2501183029889704f685a7fb7f0c0bc1d4ce500e3aa8c86d5dcddd5029bb"} Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.893179 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rstw" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.909114 4877 scope.go:117] "RemoveContainer" containerID="372f930116a6abf8ee14b666ad7499a82306bbdf49bacb074c5545f492c07d5a" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913566 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5400698a-74be-440a-9e76-ae18bc00d85b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913625 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0240e5d9-27fa-42fe-8cab-63a80897677e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913639 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qplr\" (UniqueName: \"kubernetes.io/projected/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-kube-api-access-7qplr\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913653 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913665 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66bb5cf-8e84-4f07-bfc0-358fee62eda4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913673 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3a3c00-9466-4501-b1ee-12d677fc3b4e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.913685 4877 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.928858 4877 scope.go:117] "RemoveContainer" containerID="52c0537c3fbe34668745998ab0dafe6a162eb1b3cb9518595fa4ea05c8a55138" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.940044 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pvh2t"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.944838 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pvh2t"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.963598 4877 scope.go:117] "RemoveContainer" containerID="79d41cb9ef2d580a197d1cdad924b50a228589d1a6a471d9bfff888ddcd7b13c" Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.977034 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.984495 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tq2vm"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.990007 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"] Dec 11 18:04:42 crc kubenswrapper[4877]: I1211 18:04:42.993359 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fwxl"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.007047 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rstw"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.010503 4877 scope.go:117] "RemoveContainer" containerID="0b9edceaa0952b798bc31c5715e0efb305e090709c19cb1dd3d6cdbeda09c27a" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.012150 4877 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-2rstw"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.020407 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.023344 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhhb6"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.029884 4877 scope.go:117] "RemoveContainer" containerID="1b77af49b5c0925f9f5942f3df19486431a3999ef19e3edbbc70f7ae612001e5" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.044676 4877 scope.go:117] "RemoveContainer" containerID="a96e9e9a09e94ee397954ec440b7ae3dd161b74adcfe29e1c5b74d3d56f8448c" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.060938 4877 scope.go:117] "RemoveContainer" containerID="d252fc6f3bf8204929c493eed937350270b019751e5b15c5f4daf0634bba9dc4" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.075173 4877 scope.go:117] "RemoveContainer" containerID="067537b7d361b736a84f9f67d76cad959770f1618f80fe6f28fe188e34d77083" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.088158 4877 scope.go:117] "RemoveContainer" containerID="1636e81aaab51808757aac0a37ee4e5277b4d44aa1a0d180bcdff08643405bca" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.101454 4877 scope.go:117] "RemoveContainer" containerID="2afe35ee1834b4918f1ef2bfe7e4f132f3952eab6b7cf62a1168dd4a82dc007a" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.117997 4877 scope.go:117] "RemoveContainer" containerID="e6208ac294267f3ae676de1f9da46e8304d4e464ebf2136b2266661c1b61c640" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.131327 4877 scope.go:117] "RemoveContainer" containerID="ee32b0264544ea0234ed569037e86aec478dc7493c7c8f5e34e3110716eba3e1" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.144632 4877 scope.go:117] "RemoveContainer" 
containerID="bc58a2b1e97deac1b00f40e04e48d762a557afcb1a858c5c8d322979d04d8d4c" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.159556 4877 scope.go:117] "RemoveContainer" containerID="0a9321785bf0a62f942667c38e971e32a6d07031c3c41bb4079cb65ba386d91b" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.171972 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.175312 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xkm4z"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.182137 4877 scope.go:117] "RemoveContainer" containerID="f08accf5fc2f7287b21a895b10ec2571a496a9d2c35fb9e3715dfb46072974c6" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.189650 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.192393 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m7zms"] Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.202286 4877 scope.go:117] "RemoveContainer" containerID="d2bc79b5ed0c1a395527fad1a24824ae9167d6bbd43c88f1da3c8fabe0fbd6f4" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.216530 4877 scope.go:117] "RemoveContainer" containerID="ffe56fab9407d956234b1b0fd230b7a82095b4aec7dd203214b9ecf1d3c0cf4d" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.226237 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" path="/var/lib/kubelet/pods/0240e5d9-27fa-42fe-8cab-63a80897677e/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.227330 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" path="/var/lib/kubelet/pods/14091be0-96bc-40cf-ae0e-ba1a5de910ee/volumes" Dec 11 18:04:43 crc 
kubenswrapper[4877]: I1211 18:04:43.228102 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" path="/var/lib/kubelet/pods/2f3a3c00-9466-4501-b1ee-12d677fc3b4e/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.229417 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" path="/var/lib/kubelet/pods/5400698a-74be-440a-9e76-ae18bc00d85b/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.230140 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" path="/var/lib/kubelet/pods/6c00914d-bacb-4706-9ca8-8897c4c0a544/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.231613 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" path="/var/lib/kubelet/pods/a839141a-cee3-4d0c-bfda-c1fb36ee04fd/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.232326 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" path="/var/lib/kubelet/pods/d22d1fa7-6fd8-43c8-94e0-6aa0759394dc/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.232320 4877 scope.go:117] "RemoveContainer" containerID="2af8b20e144e27fee63938afe07e27cafea4452d82e45f1c8dfb0bdce510cb7b" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.233065 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" path="/var/lib/kubelet/pods/f66bb5cf-8e84-4f07-bfc0-358fee62eda4/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.234596 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" path="/var/lib/kubelet/pods/fa0265e5-9837-4f97-891a-703b0e440df3/volumes" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.253153 4877 scope.go:117] 
"RemoveContainer" containerID="b209992be05894e18b746508f8b9893b155758edee63613290ebdcda01f928c3" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.267074 4877 scope.go:117] "RemoveContainer" containerID="d1c8e65c2738f5bdf9928d612909874c008d44f0eb9e2e952f0a896697da9b8f" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.280748 4877 scope.go:117] "RemoveContainer" containerID="d85c34bc0ade141bc8751d0ec4ca713717da59739d65350d5f5549f1cfaf661b" Dec 11 18:04:43 crc kubenswrapper[4877]: I1211 18:04:43.295612 4877 scope.go:117] "RemoveContainer" containerID="50299776091dbe50e5fe02fda2d93a75d099ad2f904d2c0a7c317c9e1de12fbf" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.688188 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5924z"] Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689028 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689046 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689059 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689065 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689075 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689082 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="registry-server" 
Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689090 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689096 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689104 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689111 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689118 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689124 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689133 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689140 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689151 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689157 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="extract-content" 
Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689164 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689169 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689176 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689182 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689190 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689196 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689204 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689210 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689218 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689224 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="extract-content" 
Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689234 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689241 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689248 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689255 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689264 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689269 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689279 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689286 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689296 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689302 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="extract-utilities" Dec 11 
18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689310 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689316 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689322 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689327 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689335 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689341 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689348 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689354 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689361 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="extract-utilities" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689366 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="extract-utilities" Dec 11 18:04:44 
crc kubenswrapper[4877]: E1211 18:04:44.689402 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689408 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: E1211 18:04:44.689416 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689422 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="extract-content" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689516 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="5400698a-74be-440a-9e76-ae18bc00d85b" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689528 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0265e5-9837-4f97-891a-703b0e440df3" containerName="marketplace-operator" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689538 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c00914d-bacb-4706-9ca8-8897c4c0a544" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689545 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3a3c00-9466-4501-b1ee-12d677fc3b4e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689556 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0240e5d9-27fa-42fe-8cab-63a80897677e" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689564 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22d1fa7-6fd8-43c8-94e0-6aa0759394dc" 
containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689572 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a839141a-cee3-4d0c-bfda-c1fb36ee04fd" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689581 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66bb5cf-8e84-4f07-bfc0-358fee62eda4" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.689589 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="14091be0-96bc-40cf-ae0e-ba1a5de910ee" containerName="registry-server" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.690388 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.693313 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.700637 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5924z"] Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.740570 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-catalog-content\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.740651 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-utilities\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " 
pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.740744 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224jz\" (UniqueName: \"kubernetes.io/projected/d0250f8a-6910-4bd8-a583-f772807319f1-kube-api-access-224jz\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.842564 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-utilities\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.843008 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224jz\" (UniqueName: \"kubernetes.io/projected/d0250f8a-6910-4bd8-a583-f772807319f1-kube-api-access-224jz\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.843057 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-utilities\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.843077 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-catalog-content\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " 
pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.843937 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0250f8a-6910-4bd8-a583-f772807319f1-catalog-content\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.863021 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224jz\" (UniqueName: \"kubernetes.io/projected/d0250f8a-6910-4bd8-a583-f772807319f1-kube-api-access-224jz\") pod \"community-operators-5924z\" (UID: \"d0250f8a-6910-4bd8-a583-f772807319f1\") " pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.895531 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.897031 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.899756 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.902497 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.945015 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7gh\" (UniqueName: \"kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.945085 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:44 crc kubenswrapper[4877]: I1211 18:04:44.945147 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.017103 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.047026 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.047170 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.047242 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq7gh\" (UniqueName: \"kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.047638 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.047698 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " 
pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.068519 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq7gh\" (UniqueName: \"kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh\") pod \"certified-operators-jcwhn\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.321434 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.427221 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5924z"] Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.541024 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:04:45 crc kubenswrapper[4877]: W1211 18:04:45.549289 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8824e39f_4b4b_4f34_9a4c_2f5f2c71d16d.slice/crio-eeed259e3d616002540f97f583edae4ad0aff1e9581cd4c06ced49ee4d01b9f4 WatchSource:0}: Error finding container eeed259e3d616002540f97f583edae4ad0aff1e9581cd4c06ced49ee4d01b9f4: Status 404 returned error can't find the container with id eeed259e3d616002540f97f583edae4ad0aff1e9581cd4c06ced49ee4d01b9f4 Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.935726 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5924z" event={"ID":"d0250f8a-6910-4bd8-a583-f772807319f1","Type":"ContainerStarted","Data":"a23cbadc0935fa7d5ab90d541e018922ea9efdb29ffb7225f02470ee3d780152"} Dec 11 18:04:45 crc kubenswrapper[4877]: I1211 18:04:45.938030 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerStarted","Data":"eeed259e3d616002540f97f583edae4ad0aff1e9581cd4c06ced49ee4d01b9f4"} Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.094703 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv6k"] Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.105761 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.114365 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.120801 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv6k"] Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.245456 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qch\" (UniqueName: \"kubernetes.io/projected/c1003f02-48c3-4729-8720-0e23ffb4b8dd-kube-api-access-s7qch\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.245515 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-utilities\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.245630 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-catalog-content\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.287744 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.291492 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.294565 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.305823 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.347140 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-catalog-content\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.347262 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qch\" (UniqueName: \"kubernetes.io/projected/c1003f02-48c3-4729-8720-0e23ffb4b8dd-kube-api-access-s7qch\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.347287 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-utilities\") pod 
\"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.347863 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-utilities\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.348092 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1003f02-48c3-4729-8720-0e23ffb4b8dd-catalog-content\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.374398 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qch\" (UniqueName: \"kubernetes.io/projected/c1003f02-48c3-4729-8720-0e23ffb4b8dd-kube-api-access-s7qch\") pod \"redhat-marketplace-cbv6k\" (UID: \"c1003f02-48c3-4729-8720-0e23ffb4b8dd\") " pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.444704 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.448783 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.448858 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9sp8\" (UniqueName: \"kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.448906 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.550566 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.551077 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content\") pod \"redhat-operators-4f422\" (UID: 
\"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.551131 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9sp8\" (UniqueName: \"kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.551484 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.557287 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.573541 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9sp8\" (UniqueName: \"kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8\") pod \"redhat-operators-4f422\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.615743 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.864891 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbv6k"] Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.959537 4877 generic.go:334] "Generic (PLEG): container finished" podID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerID="777b16c6f2dc90e1fcd9cdcf2fe429dba21de1b17f9316738e2268a0da956b61" exitCode=0 Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.959637 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerDied","Data":"777b16c6f2dc90e1fcd9cdcf2fe429dba21de1b17f9316738e2268a0da956b61"} Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.962091 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0250f8a-6910-4bd8-a583-f772807319f1" containerID="bed9d2b7b91e2c2c872d12beddc82f7954c682e06341805f7f3430a272591029" exitCode=0 Dec 11 18:04:47 crc kubenswrapper[4877]: I1211 18:04:47.962134 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5924z" event={"ID":"d0250f8a-6910-4bd8-a583-f772807319f1","Type":"ContainerDied","Data":"bed9d2b7b91e2c2c872d12beddc82f7954c682e06341805f7f3430a272591029"} Dec 11 18:04:47 crc kubenswrapper[4877]: W1211 18:04:47.984223 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1003f02_48c3_4729_8720_0e23ffb4b8dd.slice/crio-52241cb54a9dfe80bb06102d88168923e6014fcd779247d7c59d294089c019f5 WatchSource:0}: Error finding container 52241cb54a9dfe80bb06102d88168923e6014fcd779247d7c59d294089c019f5: Status 404 returned error can't find the container with id 52241cb54a9dfe80bb06102d88168923e6014fcd779247d7c59d294089c019f5 Dec 11 18:04:48 crc kubenswrapper[4877]: I1211 
18:04:48.066858 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:04:48 crc kubenswrapper[4877]: W1211 18:04:48.075609 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22458b44_1dc4_447a_9e42_d5da68cc0e26.slice/crio-aae2afe922f5190126367f9d2633c1643ec42df83dfe90af7289450d4b83f6e2 WatchSource:0}: Error finding container aae2afe922f5190126367f9d2633c1643ec42df83dfe90af7289450d4b83f6e2: Status 404 returned error can't find the container with id aae2afe922f5190126367f9d2633c1643ec42df83dfe90af7289450d4b83f6e2 Dec 11 18:04:48 crc kubenswrapper[4877]: I1211 18:04:48.970193 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerStarted","Data":"aae2afe922f5190126367f9d2633c1643ec42df83dfe90af7289450d4b83f6e2"} Dec 11 18:04:48 crc kubenswrapper[4877]: I1211 18:04:48.971653 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv6k" event={"ID":"c1003f02-48c3-4729-8720-0e23ffb4b8dd","Type":"ContainerStarted","Data":"52241cb54a9dfe80bb06102d88168923e6014fcd779247d7c59d294089c019f5"} Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:48.999994 4877 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.001021 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002430 4877 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002521 4877 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002815 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002842 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002857 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002870 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002880 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002889 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002906 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002916 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002931 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002939 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002951 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002959 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 18:04:49 crc kubenswrapper[4877]: E1211 18:04:49.002970 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.002981 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.003137 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.003152 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.003161 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 18:04:49 crc kubenswrapper[4877]: 
I1211 18:04:49.003171 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.003183 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.003195 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.005743 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e" gracePeriod=15 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.006029 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e" gracePeriod=15 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.006035 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433" gracePeriod=15 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.006210 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f" gracePeriod=15 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.006232 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8" gracePeriod=15 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.053266 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174147 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174199 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174222 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174276 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174296 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174319 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174347 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.174390 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc 
kubenswrapper[4877]: I1211 18:04:49.220905 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.221517 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.221857 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.222179 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275446 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc 
kubenswrapper[4877]: I1211 18:04:49.275486 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275535 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275573 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275655 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275686 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275722 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275675 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275742 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275783 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275720 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275853 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275890 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275929 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.275965 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.350830 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.977984 4877 generic.go:334] "Generic (PLEG): container finished" podID="e5508650-8a33-447f-bc52-87e7532200d7" containerID="92d9e8e55e212dc8c2c5b8daaa24845a51e48ccd585f36692a11695dff4b1deb" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.978207 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e5508650-8a33-447f-bc52-87e7532200d7","Type":"ContainerDied","Data":"92d9e8e55e212dc8c2c5b8daaa24845a51e48ccd585f36692a11695dff4b1deb"} Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.979436 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.979660 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.979818 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.979992 4877 status_manager.go:851] "Failed to get status for pod" 
podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.980561 4877 generic.go:334] "Generic (PLEG): container finished" podID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerID="12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.980615 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerDied","Data":"12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66"} Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.981505 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.981717 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.982224 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection 
refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.982460 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.985607 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.990425 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.991187 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.991218 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.991227 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.991237 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e" exitCode=2 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.991291 4877 scope.go:117] 
"RemoveContainer" containerID="58a9232a038be31263af95f91310a8f160e420dac5b510da3f247721c8ff094b" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.993192 4877 generic.go:334] "Generic (PLEG): container finished" podID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" containerID="5bc6720e5278a7a8a528939bbbe00065a89f9cdd1aba7640c0ae368cfdbca3a1" exitCode=0 Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.993235 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv6k" event={"ID":"c1003f02-48c3-4729-8720-0e23ffb4b8dd","Type":"ContainerDied","Data":"5bc6720e5278a7a8a528939bbbe00065a89f9cdd1aba7640c0ae368cfdbca3a1"} Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.993899 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.994092 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:49 crc kubenswrapper[4877]: I1211 18:04:49.994273 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:50 crc kubenswrapper[4877]: I1211 18:04:50.000691 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f70b362ecacb9ebe36a33fe3f025b3eae0e1cb2f7bc82a00aa1d8dafe1da6328"} Dec 11 18:04:50 crc kubenswrapper[4877]: I1211 18:04:50.000680 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:50 crc kubenswrapper[4877]: I1211 18:04:50.000748 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0941435d2bfc549eb3050f3d17f0d8b989f69f245539ca3e5f76eb376047c830"} Dec 11 18:04:51 crc kubenswrapper[4877]: I1211 18:04:51.020575 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 18:04:51 crc kubenswrapper[4877]: I1211 18:04:51.024116 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:51 crc kubenswrapper[4877]: I1211 18:04:51.024841 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 
18:04:51 crc kubenswrapper[4877]: I1211 18:04:51.025420 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:51 crc kubenswrapper[4877]: I1211 18:04:51.025941 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.020647 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.021968 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.022214 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.022491 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.024039 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.030469 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e5508650-8a33-447f-bc52-87e7532200d7","Type":"ContainerDied","Data":"3657b83f9ce33a0fc890fe2ffa407d8ec647c4a396238055d5bad25b448de8da"} Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.030512 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3657b83f9ce33a0fc890fe2ffa407d8ec647c4a396238055d5bad25b448de8da" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.030486 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.033257 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.033883 4877 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e" exitCode=0 Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.118729 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir\") pod \"e5508650-8a33-447f-bc52-87e7532200d7\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.126343 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock\") pod \"e5508650-8a33-447f-bc52-87e7532200d7\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.125822 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e5508650-8a33-447f-bc52-87e7532200d7" (UID: "e5508650-8a33-447f-bc52-87e7532200d7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.126561 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access\") pod \"e5508650-8a33-447f-bc52-87e7532200d7\" (UID: \"e5508650-8a33-447f-bc52-87e7532200d7\") " Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.127133 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock" (OuterVolumeSpecName: "var-lock") pod "e5508650-8a33-447f-bc52-87e7532200d7" (UID: "e5508650-8a33-447f-bc52-87e7532200d7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.127520 4877 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.127541 4877 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5508650-8a33-447f-bc52-87e7532200d7-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.139360 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e5508650-8a33-447f-bc52-87e7532200d7" (UID: "e5508650-8a33-447f-bc52-87e7532200d7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.229756 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5508650-8a33-447f-bc52-87e7532200d7-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.347874 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.348271 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.348723 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.349402 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.965031 4877 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.966496 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.967269 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.968037 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.968629 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 18:04:52.968990 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:52 crc kubenswrapper[4877]: I1211 
18:04:52.969297 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.043482 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.044365 4877 scope.go:117] "RemoveContainer" containerID="ba48d39ff6f4a4397075c6836c214d1f0d4d5b0201c639034433f14db18cfd4f" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.044528 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.142067 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.142225 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.142288 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 18:04:53 crc kubenswrapper[4877]: 
I1211 18:04:53.142281 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.142412 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.142487 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.143036 4877 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.143067 4877 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.143078 4877 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.223559 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.239489 4877 scope.go:117] "RemoveContainer" containerID="4060b65e884d6ce430bb3332fd16352e3992c7d759bfbbbbe741eae43be17433" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.265586 4877 scope.go:117] "RemoveContainer" containerID="86cb54848973032a92a1d9893f9c65ae5a75b393c760bdae01b493d94c0e29e8" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.293192 4877 scope.go:117] "RemoveContainer" containerID="2f42a19b2b02abfda9d4e7df89c89ddc35d35e02518ccab0017dbcbd1f4abe3e" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.317224 4877 scope.go:117] "RemoveContainer" containerID="c7e5d4d3d366a459b147641db4e7168dc7d87667774c3700722c84cfca300c8e" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.349561 4877 scope.go:117] "RemoveContainer" containerID="0e563d1d64c1af12020698e62512fa0f4972df6f6ba7dcd495189172f01f2d84" Dec 11 18:04:53 crc kubenswrapper[4877]: 
I1211 18:04:53.349721 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.350160 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.350847 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.351212 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:53 crc kubenswrapper[4877]: I1211 18:04:53.351750 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.052082 
4877 generic.go:334] "Generic (PLEG): container finished" podID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" containerID="3b0d5f25b4ca3bc6dcd11c97ee8bb24d2e7d47998d7317aa0f83705ac58b8333" exitCode=0 Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.052179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv6k" event={"ID":"c1003f02-48c3-4729-8720-0e23ffb4b8dd","Type":"ContainerDied","Data":"3b0d5f25b4ca3bc6dcd11c97ee8bb24d2e7d47998d7317aa0f83705ac58b8333"} Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.052823 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: E1211 18:04:54.052792 4877 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-cbv6k.18803b54bc42b5da openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-cbv6k,UID:c1003f02-48c3-4729-8720-0e23ffb4b8dd,APIVersion:v1,ResourceVersion:29432,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container extract-utilities,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 18:04:49.051014618 +0000 UTC m=+250.077258662,LastTimestamp:2025-12-11 18:04:49.051014618 +0000 UTC m=+250.077258662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 18:04:54 crc kubenswrapper[4877]: 
I1211 18:04:54.053232 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.053759 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.053992 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.054199 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.056660 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerStarted","Data":"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd"} Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.058161 4877 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.059062 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.059293 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.059516 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.059670 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.061234 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerID="f669be5780c35422e71e2c14920203ef698971ac179b98749f04176c41459cfd" exitCode=0 Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.061332 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerDied","Data":"f669be5780c35422e71e2c14920203ef698971ac179b98749f04176c41459cfd"} Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.061805 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.062209 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.062513 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.062889 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 
38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.063156 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.063499 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.065086 4877 generic.go:334] "Generic (PLEG): container finished" podID="d0250f8a-6910-4bd8-a583-f772807319f1" containerID="49bf4e02214d963ba6faf23148f6f8184028668e47fe8f538c62ac79cec42587" exitCode=0 Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.065137 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5924z" event={"ID":"d0250f8a-6910-4bd8-a583-f772807319f1","Type":"ContainerDied","Data":"49bf4e02214d963ba6faf23148f6f8184028668e47fe8f538c62ac79cec42587"} Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.066292 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.066681 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.067022 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.069235 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.069681 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.070343 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:54 crc kubenswrapper[4877]: I1211 18:04:54.071128 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" 
pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.084443 4877 generic.go:334] "Generic (PLEG): container finished" podID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerID="f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd" exitCode=0 Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.084523 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerDied","Data":"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd"} Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.085053 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.085448 4877 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.085947 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection 
refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.086340 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.086720 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.087030 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:55 crc kubenswrapper[4877]: I1211 18:04:55.087313 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.093957 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5924z" event={"ID":"d0250f8a-6910-4bd8-a583-f772807319f1","Type":"ContainerStarted","Data":"a87558b9198d683147b5fb5501f1b71c1f2a8a59300ee87903e59cc229053a28"} Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.095024 4877 status_manager.go:851] 
"Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.096596 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.097021 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.097437 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.098039 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.098639 4877 
status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.098655 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbv6k" event={"ID":"c1003f02-48c3-4729-8720-0e23ffb4b8dd","Type":"ContainerStarted","Data":"5628b0d6a4947c975dcfac4fe17fcb05f93879c336562386f0be0dc7789e66fb"} Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.099220 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.099660 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.099961 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.100315 4877 status_manager.go:851] "Failed to get status for pod" 
podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.100806 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.101120 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.101506 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerStarted","Data":"82f91495fa80b8006bd9a7b4074a11935a8d16c1e45a146fb419920ad2fbc5d9"} Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.102697 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.103103 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.103808 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.104219 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.104582 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:56 crc kubenswrapper[4877]: I1211 18:04:56.104945 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.108385 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" 
event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerStarted","Data":"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f"} Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.109516 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.110160 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.110751 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.111086 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.111438 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.111746 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.445767 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.445857 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.493707 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.494560 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.495354 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.496090 4877 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.496469 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.496888 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.497227 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.616529 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:57 crc kubenswrapper[4877]: I1211 18:04:57.616591 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:04:58 crc kubenswrapper[4877]: I1211 18:04:58.653357 4877 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-4f422" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="registry-server" probeResult="failure" output=< Dec 11 18:04:58 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:04:58 crc kubenswrapper[4877]: > Dec 11 18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.917365 4877 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.917661 4877 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.917938 4877 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.918332 4877 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.919209 4877 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:58 crc kubenswrapper[4877]: I1211 18:04:58.919284 4877 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 
18:04:58 crc kubenswrapper[4877]: E1211 18:04:58.919892 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="200ms" Dec 11 18:04:59 crc kubenswrapper[4877]: E1211 18:04:59.120518 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="400ms" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.217446 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.217878 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.218482 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.218776 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" 
pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.219534 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: I1211 18:04:59.219972 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:04:59 crc kubenswrapper[4877]: E1211 18:04:59.521712 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="800ms" Dec 11 18:05:00 crc kubenswrapper[4877]: E1211 18:05:00.323552 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="1.6s" Dec 11 18:05:01 crc kubenswrapper[4877]: E1211 18:05:01.925759 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: 
connection refused" interval="3.2s" Dec 11 18:05:03 crc kubenswrapper[4877]: E1211 18:05:03.472815 4877 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.103:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-cbv6k.18803b54bc42b5da openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-cbv6k,UID:c1003f02-48c3-4729-8720-0e23ffb4b8dd,APIVersion:v1,ResourceVersion:29432,FieldPath:spec.initContainers{extract-utilities},},Reason:Created,Message:Created container extract-utilities,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 18:04:49.051014618 +0000 UTC m=+250.077258662,LastTimestamp:2025-12-11 18:04:49.051014618 +0000 UTC m=+250.077258662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.122498 4877 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.122590 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.214523 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.216011 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.216515 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.217228 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.217917 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.218447 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.218875 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.231492 4877 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.231532 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:04 crc kubenswrapper[4877]: E1211 18:05:04.232053 4877 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.232914 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:04 crc kubenswrapper[4877]: I1211 18:05:04.676137 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" containerName="oauth-openshift" containerID="cri-o://81974eb527b02de4a7795bc767f5f82e3e01b3e795eb78740f8f2e56bb15437b" gracePeriod=15 Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.018101 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.018153 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.070932 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.071748 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.072099 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.072620 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" 
pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.073167 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.073578 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.073916 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: E1211 18:05:05.127353 4877 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.103:6443: connect: connection refused" interval="6.4s" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.158510 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"551a15c14315cf1368db7dea54fb7721b5614eee88d0c6dfbcd5766f64b9a9e6"} Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.202850 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5924z" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.203924 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.204568 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.204859 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.205162 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 
18:05:05.205438 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.206142 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.321875 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.321992 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.383118 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.383860 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.384275 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.384935 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.385654 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.386072 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:05 crc kubenswrapper[4877]: I1211 18:05:05.386654 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.207226 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:05:06 crc kubenswrapper[4877]: 
I1211 18:05:06.208252 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.208847 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.209238 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.209522 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.209808 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:06 crc kubenswrapper[4877]: I1211 18:05:06.210182 4877 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.175004 4877 generic.go:334] "Generic (PLEG): container finished" podID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" containerID="81974eb527b02de4a7795bc767f5f82e3e01b3e795eb78740f8f2e56bb15437b" exitCode=0 Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.175129 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" event={"ID":"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8","Type":"ContainerDied","Data":"81974eb527b02de4a7795bc767f5f82e3e01b3e795eb78740f8f2e56bb15437b"} Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.177726 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"19d4ac8337e8d25a56095d183a2d5eb8e9fa17e592d8ab6601be527dc94ccc33"} Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.181300 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.181448 4877 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc" exitCode=1 Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.181535 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc"} Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.182462 4877 scope.go:117] "RemoveContainer" containerID="f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.183030 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.183735 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.184235 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.184628 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.185053 4877 
status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.185353 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.185666 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.496449 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cbv6k" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.497442 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.498207 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.498877 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.499626 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.499915 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.500276 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.500706 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" 
pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.663126 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.664286 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.665062 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.665489 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.665945 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: 
connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.666616 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.667179 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.667476 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.703715 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.704783 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.705277 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.705591 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.705887 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.706172 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.706517 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.706894 4877 status_manager.go:851] "Failed to get status for pod" 
podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.758273 4877 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kqnqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.758699 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 11 18:05:07 crc kubenswrapper[4877]: I1211 18:05:07.878556 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.190610 4877 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="19d4ac8337e8d25a56095d183a2d5eb8e9fa17e592d8ab6601be527dc94ccc33" exitCode=0 Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.190784 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"19d4ac8337e8d25a56095d183a2d5eb8e9fa17e592d8ab6601be527dc94ccc33"} Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.191013 4877 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:08 crc 
kubenswrapper[4877]: I1211 18:05:08.191045 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:08 crc kubenswrapper[4877]: E1211 18:05:08.191523 4877 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.191674 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.192465 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.193145 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.193737 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.194104 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.194470 4877 status_manager.go:851] "Failed to get status for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:08 crc kubenswrapper[4877]: I1211 18:05:08.194857 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.201697 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"696b6dc7be9da86e080b76ddcdc2e369a7daefec104711da87e25a0ff9416a29"} Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.206675 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 
18:05:09.206760 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b68e75ecdab6744330130229d6a24608f3e4cc85ca89a6ecbdbcc42674636003"} Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.222901 4877 status_manager.go:851] "Failed to get status for pod" podUID="d0250f8a-6910-4bd8-a583-f772807319f1" pod="openshift-marketplace/community-operators-5924z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5924z\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.223549 4877 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.223801 4877 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.224040 4877 status_manager.go:851] "Failed to get status for pod" podUID="c1003f02-48c3-4729-8720-0e23ffb4b8dd" pod="openshift-marketplace/redhat-marketplace-cbv6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbv6k\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.224297 4877 status_manager.go:851] "Failed to get status 
for pod" podUID="e5508650-8a33-447f-bc52-87e7532200d7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.224575 4877 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.224820 4877 status_manager.go:851] "Failed to get status for pod" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" pod="openshift-marketplace/certified-operators-jcwhn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-jcwhn\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.225061 4877 status_manager.go:851] "Failed to get status for pod" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" pod="openshift-marketplace/redhat-operators-4f422" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4f422\": dial tcp 38.102.83.103:6443: connect: connection refused" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.509796 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695095 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695679 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695733 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695774 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695811 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc 
kubenswrapper[4877]: I1211 18:05:09.695844 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695877 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695908 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9whj5\" (UniqueName: \"kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695942 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.695969 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696039 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696063 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696087 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696121 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error\") pod \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\" (UID: \"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8\") " Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696520 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.696953 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.697550 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.698954 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.702337 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.742470 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5" (OuterVolumeSpecName: "kube-api-access-9whj5") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "kube-api-access-9whj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.742648 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.743159 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.743651 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.745063 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.745733 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.745976 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.746324 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.746964 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" (UID: "fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.797943 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.797993 4877 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798007 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798018 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798032 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798045 4877 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798053 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798063 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798072 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9whj5\" (UniqueName: \"kubernetes.io/projected/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-kube-api-access-9whj5\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798088 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798101 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798112 4877 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798124 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:09 crc kubenswrapper[4877]: I1211 18:05:09.798145 4877 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.215443 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6058f8905092348c81499ffcacfd42e72850775dc257cd68bc0c3cb123d5e99f"} Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.215506 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c92e91ea2c64067dd1c2018f47ac42a2897d0a1e20759e56abaec5e98b92ed1"} Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.217141 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" event={"ID":"fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8","Type":"ContainerDied","Data":"35576680f1780ebccc8e7e25b046f2a992774a6fcd101e70caf0514b403b45ca"} Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.217192 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kqnqb" Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.217222 4877 scope.go:117] "RemoveContainer" containerID="81974eb527b02de4a7795bc767f5f82e3e01b3e795eb78740f8f2e56bb15437b" Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.932143 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.932603 4877 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 18:05:10 crc kubenswrapper[4877]: I1211 18:05:10.932782 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 18:05:11 crc kubenswrapper[4877]: I1211 18:05:11.228909 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3b33aa6084f0c2f0d8c6cef6462cd2191d3dff7a97bd75d5a0caa0cab1f18559"} Dec 11 18:05:11 crc kubenswrapper[4877]: I1211 18:05:11.228976 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c729b42bac41d5676fa83b6f74c0d4e4e3e38cd5586e499a32e0dd166d7ca4ea"} Dec 11 18:05:11 crc kubenswrapper[4877]: I1211 18:05:11.229334 4877 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:11 crc kubenswrapper[4877]: I1211 18:05:11.229390 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:14 crc kubenswrapper[4877]: I1211 18:05:14.233651 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:14 crc kubenswrapper[4877]: I1211 18:05:14.234109 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:14 crc kubenswrapper[4877]: I1211 18:05:14.234127 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:14 crc kubenswrapper[4877]: I1211 18:05:14.240489 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:16 crc kubenswrapper[4877]: I1211 18:05:16.240496 4877 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:17 crc kubenswrapper[4877]: I1211 18:05:17.266516 4877 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:17 crc kubenswrapper[4877]: I1211 18:05:17.266552 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:17 crc kubenswrapper[4877]: I1211 18:05:17.270869 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:05:17 crc kubenswrapper[4877]: I1211 18:05:17.274470 4877 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="23dea282-995f-4005-aaf3-8ace0bf8abbb" Dec 11 18:05:17 crc kubenswrapper[4877]: I1211 18:05:17.878445 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:05:18 crc kubenswrapper[4877]: I1211 18:05:18.288341 4877 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:18 crc kubenswrapper[4877]: I1211 18:05:18.288422 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:05:19 crc kubenswrapper[4877]: I1211 18:05:19.242990 4877 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="23dea282-995f-4005-aaf3-8ace0bf8abbb" Dec 11 18:05:20 crc kubenswrapper[4877]: I1211 18:05:20.932368 4877 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 18:05:20 crc kubenswrapper[4877]: I1211 18:05:20.932489 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 18:05:25 crc kubenswrapper[4877]: I1211 18:05:25.586938 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 18:05:26 crc kubenswrapper[4877]: I1211 18:05:26.419291 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 18:05:27 crc kubenswrapper[4877]: I1211 18:05:27.351573 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 18:05:27 crc kubenswrapper[4877]: I1211 18:05:27.362540 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 18:05:27 crc kubenswrapper[4877]: I1211 18:05:27.475586 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 18:05:27 crc kubenswrapper[4877]: I1211 18:05:27.828279 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 18:05:28 crc kubenswrapper[4877]: I1211 18:05:28.364773 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 18:05:28 crc kubenswrapper[4877]: I1211 18:05:28.459829 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 18:05:28 crc kubenswrapper[4877]: I1211 18:05:28.535781 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 18:05:28 crc kubenswrapper[4877]: I1211 18:05:28.945248 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 18:05:28 crc kubenswrapper[4877]: I1211 18:05:28.988560 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 
18:05:29.007574 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.011205 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.093530 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.106333 4877 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.133398 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.237750 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.371931 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.375634 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.516736 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.557641 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.761502 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.846612 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 18:05:29 crc kubenswrapper[4877]: I1211 18:05:29.868252 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.038412 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.072131 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.366566 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.551260 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.570737 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.625293 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.661284 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.762848 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 
18:05:30.812636 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.932353 4877 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.932483 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.932570 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.933532 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"b68e75ecdab6744330130229d6a24608f3e4cc85ca89a6ecbdbcc42674636003"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.933753 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://b68e75ecdab6744330130229d6a24608f3e4cc85ca89a6ecbdbcc42674636003" gracePeriod=30 Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.958117 
4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 18:05:30 crc kubenswrapper[4877]: I1211 18:05:30.958988 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.140397 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.238320 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.280455 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.599895 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.625210 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.939350 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 18:05:31 crc kubenswrapper[4877]: I1211 18:05:31.941887 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 18:05:32 crc kubenswrapper[4877]: I1211 18:05:32.230355 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 18:05:32 crc kubenswrapper[4877]: I1211 18:05:32.510950 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 18:05:32 crc kubenswrapper[4877]: I1211 
18:05:32.670315 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.318702 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.361656 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.379651 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.465962 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.555854 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.585485 4877 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.602508 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.618857 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.711223 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.840219 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 18:05:33 crc kubenswrapper[4877]: I1211 18:05:33.922714 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 18:05:34 crc kubenswrapper[4877]: I1211 18:05:34.093464 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 18:05:34 crc kubenswrapper[4877]: I1211 18:05:34.558800 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 18:05:34 crc kubenswrapper[4877]: I1211 18:05:34.655714 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 18:05:34 crc kubenswrapper[4877]: I1211 18:05:34.897239 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 18:05:34 crc kubenswrapper[4877]: I1211 18:05:34.972995 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.206466 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.312296 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.352308 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.367923 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 18:05:35 crc 
kubenswrapper[4877]: I1211 18:05:35.418219 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.711122 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 18:05:35 crc kubenswrapper[4877]: I1211 18:05:35.952973 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.347962 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.404163 4877 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.472082 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.524166 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.694586 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 18:05:36 crc kubenswrapper[4877]: I1211 18:05:36.926225 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 18:05:37 crc kubenswrapper[4877]: I1211 18:05:37.142007 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 18:05:37 crc kubenswrapper[4877]: I1211 18:05:37.153884 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 18:05:37 crc kubenswrapper[4877]: I1211 18:05:37.374928 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 18:05:37 crc kubenswrapper[4877]: I1211 18:05:37.397979 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 18:05:37 crc kubenswrapper[4877]: I1211 18:05:37.570779 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.047111 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.180898 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.237112 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.347818 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.463727 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.480575 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.540105 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.713279 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.746113 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.749969 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.900730 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 18:05:38 crc kubenswrapper[4877]: I1211 18:05:38.918879 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.053057 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.096577 4877 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.120155 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.168113 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.276465 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 18:05:39 crc 
kubenswrapper[4877]: I1211 18:05:39.288355 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.508484 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.616141 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.859503 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.937600 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.962138 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 18:05:39 crc kubenswrapper[4877]: I1211 18:05:39.977138 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.085595 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.218582 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.239690 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.257013 4877 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.312168 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.675354 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.698203 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 18:05:40 crc kubenswrapper[4877]: I1211 18:05:40.733624 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 18:05:41 crc kubenswrapper[4877]: I1211 18:05:41.082052 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 18:05:41 crc kubenswrapper[4877]: I1211 18:05:41.162863 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 18:05:41 crc kubenswrapper[4877]: I1211 18:05:41.186699 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 18:05:41 crc kubenswrapper[4877]: I1211 18:05:41.389079 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 18:05:41 crc kubenswrapper[4877]: I1211 18:05:41.607728 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 18:05:48 crc kubenswrapper[4877]: I1211 18:05:48.077881 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 
18:05:48 crc kubenswrapper[4877]: I1211 18:05:48.269236 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 18:05:48 crc kubenswrapper[4877]: I1211 18:05:48.274469 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 18:05:49 crc kubenswrapper[4877]: I1211 18:05:49.524537 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 18:05:49 crc kubenswrapper[4877]: I1211 18:05:49.589353 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 18:05:49 crc kubenswrapper[4877]: I1211 18:05:49.866175 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 18:05:50 crc kubenswrapper[4877]: I1211 18:05:50.955974 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 18:05:51 crc kubenswrapper[4877]: I1211 18:05:51.274586 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 18:05:51 crc kubenswrapper[4877]: I1211 18:05:51.469824 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 18:05:51 crc kubenswrapper[4877]: I1211 18:05:51.475938 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 18:05:51 crc kubenswrapper[4877]: I1211 18:05:51.819411 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 18:05:51 crc kubenswrapper[4877]: I1211 18:05:51.926272 4877 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 18:05:52 crc kubenswrapper[4877]: I1211 18:05:52.008637 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 18:05:52 crc kubenswrapper[4877]: I1211 18:05:52.325162 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 18:05:52 crc kubenswrapper[4877]: I1211 18:05:52.707957 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 18:05:53 crc kubenswrapper[4877]: I1211 18:05:53.041118 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 18:05:53 crc kubenswrapper[4877]: I1211 18:05:53.321392 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 18:05:53 crc kubenswrapper[4877]: I1211 18:05:53.450714 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 18:05:53 crc kubenswrapper[4877]: I1211 18:05:53.471048 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 18:05:53 crc kubenswrapper[4877]: I1211 18:05:53.524490 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 18:05:54 crc kubenswrapper[4877]: I1211 18:05:54.071877 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.124481 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.144222 4877 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.528548 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.700179 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.726748 4877 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.794394 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 18:05:55 crc kubenswrapper[4877]: I1211 18:05:55.915030 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 18:05:56 crc kubenswrapper[4877]: I1211 18:05:56.284705 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 18:05:56 crc kubenswrapper[4877]: I1211 18:05:56.443912 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 18:05:56 crc kubenswrapper[4877]: I1211 18:05:56.818123 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 18:05:57 crc kubenswrapper[4877]: I1211 18:05:57.028614 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 18:05:57 crc kubenswrapper[4877]: I1211 18:05:57.293952 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 
18:05:57 crc kubenswrapper[4877]: I1211 18:05:57.306747 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 18:05:57 crc kubenswrapper[4877]: I1211 18:05:57.788429 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.260282 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.393175 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.506958 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.639796 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.697635 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.850238 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 18:05:58 crc kubenswrapper[4877]: I1211 18:05:58.917341 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 18:05:59 crc kubenswrapper[4877]: I1211 18:05:59.849530 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 18:05:59 crc kubenswrapper[4877]: I1211 18:05:59.878137 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 18:06:00 
crc kubenswrapper[4877]: I1211 18:06:00.103037 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 18:06:00 crc kubenswrapper[4877]: I1211 18:06:00.134586 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 18:06:00 crc kubenswrapper[4877]: I1211 18:06:00.176354 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 18:06:00 crc kubenswrapper[4877]: I1211 18:06:00.775307 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 18:06:00 crc kubenswrapper[4877]: I1211 18:06:00.874625 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.408505 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.441208 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.475218 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.587127 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.589843 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 
11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.589920 4877 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b68e75ecdab6744330130229d6a24608f3e4cc85ca89a6ecbdbcc42674636003" exitCode=137 Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.589986 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b68e75ecdab6744330130229d6a24608f3e4cc85ca89a6ecbdbcc42674636003"} Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.590029 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d162af0a7ed9fca7862a7e245dc95b2c0f8fa7834279d2346c573dd326c3817"} Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.590055 4877 scope.go:117] "RemoveContainer" containerID="f054a8f8d848e9365c76aba436bb3a3d1f4a9afba3b9716a93ec61af40180ccc" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.812656 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.898966 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 18:06:01 crc kubenswrapper[4877]: I1211 18:06:01.986992 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.043769 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.268692 4877 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.393709 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.597579 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.969758 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 18:06:02 crc kubenswrapper[4877]: I1211 18:06:02.982477 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.132858 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.168524 4877 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.444172 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.642280 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.777468 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 18:06:03 crc kubenswrapper[4877]: I1211 18:06:03.849132 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" 
Dec 11 18:06:04 crc kubenswrapper[4877]: I1211 18:06:04.920346 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 18:06:04 crc kubenswrapper[4877]: I1211 18:06:04.924542 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 18:06:04 crc kubenswrapper[4877]: I1211 18:06:04.970608 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 18:06:05 crc kubenswrapper[4877]: I1211 18:06:05.040339 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 18:06:05 crc kubenswrapper[4877]: I1211 18:06:05.147772 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 18:06:05 crc kubenswrapper[4877]: I1211 18:06:05.515135 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 18:06:05 crc kubenswrapper[4877]: I1211 18:06:05.850011 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 18:06:05 crc kubenswrapper[4877]: I1211 18:06:05.863753 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.033343 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.402039 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.504212 4877 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.506509 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5924z" podStartSLOduration=74.983567042 podStartE2EDuration="1m22.506482486s" podCreationTimestamp="2025-12-11 18:04:44 +0000 UTC" firstStartedPulling="2025-12-11 18:04:47.963438018 +0000 UTC m=+248.989682062" lastFinishedPulling="2025-12-11 18:04:55.486353462 +0000 UTC m=+256.512597506" observedRunningTime="2025-12-11 18:05:15.700356097 +0000 UTC m=+276.726600151" watchObservedRunningTime="2025-12-11 18:06:06.506482486 +0000 UTC m=+327.532726560" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.506661 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cbv6k" podStartSLOduration=73.967331061 podStartE2EDuration="1m19.506656121s" podCreationTimestamp="2025-12-11 18:04:47 +0000 UTC" firstStartedPulling="2025-12-11 18:04:50.000665688 +0000 UTC m=+251.026909742" lastFinishedPulling="2025-12-11 18:04:55.539990758 +0000 UTC m=+256.566234802" observedRunningTime="2025-12-11 18:05:15.778262506 +0000 UTC m=+276.804506560" watchObservedRunningTime="2025-12-11 18:06:06.506656121 +0000 UTC m=+327.532900185" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.507122 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4f422" podStartSLOduration=73.345688859 podStartE2EDuration="1m19.507114115s" podCreationTimestamp="2025-12-11 18:04:47 +0000 UTC" firstStartedPulling="2025-12-11 18:04:49.983204245 +0000 UTC m=+251.009448289" lastFinishedPulling="2025-12-11 18:04:56.144629501 +0000 UTC m=+257.170873545" observedRunningTime="2025-12-11 18:05:15.675880158 +0000 UTC m=+276.702124232" watchObservedRunningTime="2025-12-11 18:06:06.507114115 +0000 UTC m=+327.533358189" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.509991 
4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=77.509983041 podStartE2EDuration="1m17.509983041s" podCreationTimestamp="2025-12-11 18:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:05:15.747714758 +0000 UTC m=+276.773958802" watchObservedRunningTime="2025-12-11 18:06:06.509983041 +0000 UTC m=+327.536227095" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.510109 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jcwhn" podStartSLOduration=74.861377051 podStartE2EDuration="1m22.510104524s" podCreationTimestamp="2025-12-11 18:04:44 +0000 UTC" firstStartedPulling="2025-12-11 18:04:47.961419262 +0000 UTC m=+248.987663306" lastFinishedPulling="2025-12-11 18:04:55.610146735 +0000 UTC m=+256.636390779" observedRunningTime="2025-12-11 18:05:15.854936518 +0000 UTC m=+276.881180562" watchObservedRunningTime="2025-12-11 18:06:06.510104524 +0000 UTC m=+327.536348598" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.511734 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kqnqb","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.511799 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fb796c88-7b6rs","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 18:06:06 crc kubenswrapper[4877]: E1211 18:06:06.512049 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" containerName="oauth-openshift" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512074 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" 
containerName="oauth-openshift" Dec 11 18:06:06 crc kubenswrapper[4877]: E1211 18:06:06.512091 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5508650-8a33-447f-bc52-87e7532200d7" containerName="installer" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512100 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5508650-8a33-447f-bc52-87e7532200d7" containerName="installer" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512258 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" containerName="oauth-openshift" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512273 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5508650-8a33-447f-bc52-87e7532200d7" containerName="installer" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512579 4877 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.512614 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="40acfff0-36b4-4de3-a570-498c52cabfa9" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.513005 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.520284 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.520536 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.520583 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.521093 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.521343 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.521278 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.523733 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.525517 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.525709 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.526576 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 18:06:06 
crc kubenswrapper[4877]: I1211 18:06:06.526549 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.526829 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.527026 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.537057 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.539912 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.550580 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.577214 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=50.577184896 podStartE2EDuration="50.577184896s" podCreationTimestamp="2025-12-11 18:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:06:06.572483515 +0000 UTC m=+327.598727579" watchObservedRunningTime="2025-12-11 18:06:06.577184896 +0000 UTC m=+327.603428950" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678431 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678489 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678520 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678557 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678581 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-audit-policies\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: 
\"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678616 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5744078-060f-41c5-9af2-45078cebcfdc-audit-dir\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678647 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678680 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-error\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678712 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-session\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678733 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-login\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678762 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpnk\" (UniqueName: \"kubernetes.io/projected/b5744078-060f-41c5-9af2-45078cebcfdc-kube-api-access-7wpnk\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678787 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678812 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.678844 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.694325 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.779805 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-error\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.779908 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-session\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.779958 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-login\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780007 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780042 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpnk\" (UniqueName: \"kubernetes.io/projected/b5744078-060f-41c5-9af2-45078cebcfdc-kube-api-access-7wpnk\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780079 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780110 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780153 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " 
pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780199 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780235 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780286 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780321 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-audit-policies\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780422 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b5744078-060f-41c5-9af2-45078cebcfdc-audit-dir\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.780473 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.781878 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.781959 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.781987 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5744078-060f-41c5-9af2-45078cebcfdc-audit-dir\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.782040 
4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-audit-policies\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.782320 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.789594 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.789754 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.789782 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-session\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " 
pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.790606 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.791320 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-error\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.791323 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.792145 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-login\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.797257 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/b5744078-060f-41c5-9af2-45078cebcfdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.803511 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpnk\" (UniqueName: \"kubernetes.io/projected/b5744078-060f-41c5-9af2-45078cebcfdc-kube-api-access-7wpnk\") pod \"oauth-openshift-6fb796c88-7b6rs\" (UID: \"b5744078-060f-41c5-9af2-45078cebcfdc\") " pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.843552 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:06 crc kubenswrapper[4877]: I1211 18:06:06.962653 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 18:06:07 crc kubenswrapper[4877]: I1211 18:06:07.226091 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8" path="/var/lib/kubelet/pods/fd0a5aa8-ffcb-4186-9f31-3e988bb8c6c8/volumes" Dec 11 18:06:07 crc kubenswrapper[4877]: I1211 18:06:07.551038 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 18:06:07 crc kubenswrapper[4877]: I1211 18:06:07.580112 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 18:06:07 crc kubenswrapper[4877]: I1211 18:06:07.878321 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:06:08 crc kubenswrapper[4877]: I1211 18:06:08.078524 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 18:06:08 crc kubenswrapper[4877]: I1211 18:06:08.434119 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 18:06:08 crc kubenswrapper[4877]: I1211 18:06:08.877626 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 18:06:08 crc kubenswrapper[4877]: I1211 18:06:08.905721 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 18:06:09 crc kubenswrapper[4877]: I1211 18:06:09.441899 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 18:06:10 crc kubenswrapper[4877]: I1211 18:06:10.213433 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 18:06:10 crc kubenswrapper[4877]: I1211 18:06:10.226707 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 18:06:10 crc kubenswrapper[4877]: I1211 18:06:10.441915 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 18:06:10 crc kubenswrapper[4877]: I1211 18:06:10.932183 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:06:10 crc kubenswrapper[4877]: I1211 18:06:10.937230 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:06:11 crc kubenswrapper[4877]: I1211 18:06:11.336816 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 18:06:11 
crc kubenswrapper[4877]: I1211 18:06:11.657226 4877 generic.go:334] "Generic (PLEG): container finished" podID="1eaf037c-b9a9-4c1b-b108-0ffcad610322" containerID="496a5ae05d3707ec86019c7b43cbdb883eaebf289e966f79ee533667c21a8720" exitCode=0 Dec 11 18:06:11 crc kubenswrapper[4877]: I1211 18:06:11.657298 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" event={"ID":"1eaf037c-b9a9-4c1b-b108-0ffcad610322","Type":"ContainerDied","Data":"496a5ae05d3707ec86019c7b43cbdb883eaebf289e966f79ee533667c21a8720"} Dec 11 18:06:11 crc kubenswrapper[4877]: I1211 18:06:11.658735 4877 scope.go:117] "RemoveContainer" containerID="496a5ae05d3707ec86019c7b43cbdb883eaebf289e966f79ee533667c21a8720" Dec 11 18:06:11 crc kubenswrapper[4877]: I1211 18:06:11.667370 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 18:06:11 crc kubenswrapper[4877]: I1211 18:06:11.741040 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.018619 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.051143 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.354349 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.437958 4877 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.438332 4877 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f70b362ecacb9ebe36a33fe3f025b3eae0e1cb2f7bc82a00aa1d8dafe1da6328" gracePeriod=5 Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.667836 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/1.log" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.668427 4877 generic.go:334] "Generic (PLEG): container finished" podID="1eaf037c-b9a9-4c1b-b108-0ffcad610322" containerID="1f751b690292df66d800fdaad04085503d1e16a37eca6163b99a75c103b07a03" exitCode=1 Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.668472 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" event={"ID":"1eaf037c-b9a9-4c1b-b108-0ffcad610322","Type":"ContainerDied","Data":"1f751b690292df66d800fdaad04085503d1e16a37eca6163b99a75c103b07a03"} Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.668544 4877 scope.go:117] "RemoveContainer" containerID="496a5ae05d3707ec86019c7b43cbdb883eaebf289e966f79ee533667c21a8720" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.669551 4877 scope.go:117] "RemoveContainer" containerID="1f751b690292df66d800fdaad04085503d1e16a37eca6163b99a75c103b07a03" Dec 11 18:06:12 crc kubenswrapper[4877]: E1211 18:06:12.669907 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-4qvdz_openshift-marketplace(1eaf037c-b9a9-4c1b-b108-0ffcad610322)\"" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" podUID="1eaf037c-b9a9-4c1b-b108-0ffcad610322" Dec 11 18:06:12 crc kubenswrapper[4877]: I1211 18:06:12.861464 4877 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 18:06:13 crc kubenswrapper[4877]: I1211 18:06:13.072657 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 18:06:13 crc kubenswrapper[4877]: I1211 18:06:13.421099 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 18:06:13 crc kubenswrapper[4877]: I1211 18:06:13.453622 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 18:06:13 crc kubenswrapper[4877]: I1211 18:06:13.680332 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/1.log" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.011594 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.080968 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.127958 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.386452 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.403221 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fb796c88-7b6rs"] Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.516533 4877 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.587618 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.601618 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fb796c88-7b6rs"] Dec 11 18:06:14 crc kubenswrapper[4877]: I1211 18:06:14.687858 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" event={"ID":"b5744078-060f-41c5-9af2-45078cebcfdc","Type":"ContainerStarted","Data":"cc0f185b122938788da6bee4b3e1daf59847588e47400b8b5ec1a3b76df73a1d"} Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.016281 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.079789 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.082223 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.694899 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fb796c88-7b6rs_b5744078-060f-41c5-9af2-45078cebcfdc/oauth-openshift/0.log" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.695180 4877 generic.go:334] "Generic (PLEG): container finished" podID="b5744078-060f-41c5-9af2-45078cebcfdc" containerID="35794afa44fa2c1b16e03afa67e7678fe23cb4b26f4db0daae992dc7600f7c3a" exitCode=255 Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.695216 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" event={"ID":"b5744078-060f-41c5-9af2-45078cebcfdc","Type":"ContainerDied","Data":"35794afa44fa2c1b16e03afa67e7678fe23cb4b26f4db0daae992dc7600f7c3a"} Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.695913 4877 scope.go:117] "RemoveContainer" containerID="35794afa44fa2c1b16e03afa67e7678fe23cb4b26f4db0daae992dc7600f7c3a" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.903559 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 18:06:15 crc kubenswrapper[4877]: I1211 18:06:15.904084 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.303531 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.527742 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.704569 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fb796c88-7b6rs_b5744078-060f-41c5-9af2-45078cebcfdc/oauth-openshift/1.log" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.705612 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fb796c88-7b6rs_b5744078-060f-41c5-9af2-45078cebcfdc/oauth-openshift/0.log" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.705679 4877 generic.go:334] "Generic (PLEG): container finished" podID="b5744078-060f-41c5-9af2-45078cebcfdc" containerID="74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96" exitCode=255 Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.705718 4877 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" event={"ID":"b5744078-060f-41c5-9af2-45078cebcfdc","Type":"ContainerDied","Data":"74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96"} Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.705767 4877 scope.go:117] "RemoveContainer" containerID="35794afa44fa2c1b16e03afa67e7678fe23cb4b26f4db0daae992dc7600f7c3a" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.706579 4877 scope.go:117] "RemoveContainer" containerID="74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96" Dec 11 18:06:16 crc kubenswrapper[4877]: E1211 18:06:16.706855 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6fb796c88-7b6rs_openshift-authentication(b5744078-060f-41c5-9af2-45078cebcfdc)\"" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" podUID="b5744078-060f-41c5-9af2-45078cebcfdc" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.844504 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.844631 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" Dec 11 18:06:16 crc kubenswrapper[4877]: I1211 18:06:16.927009 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.308408 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.504707 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 
18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.523447 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.713582 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fb796c88-7b6rs_b5744078-060f-41c5-9af2-45078cebcfdc/oauth-openshift/1.log" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.714457 4877 scope.go:117] "RemoveContainer" containerID="74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96" Dec 11 18:06:17 crc kubenswrapper[4877]: E1211 18:06:17.714722 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6fb796c88-7b6rs_openshift-authentication(b5744078-060f-41c5-9af2-45078cebcfdc)\"" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" podUID="b5744078-060f-41c5-9af2-45078cebcfdc" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.715921 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 18:06:17 crc kubenswrapper[4877]: I1211 18:06:17.716015 4877 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f70b362ecacb9ebe36a33fe3f025b3eae0e1cb2f7bc82a00aa1d8dafe1da6328" exitCode=137 Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.078195 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.078304 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.086641 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.127233 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.173762 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.177683 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184243 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184300 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184397 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184422 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184452 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184472 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184506 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184497 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184731 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184761 4877 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184774 4877 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.184784 4877 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.195501 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.218763 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.286447 4877 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.286498 4877 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.698619 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.722973 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.723159 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.723182 4877 scope.go:117] "RemoveContainer" containerID="f70b362ecacb9ebe36a33fe3f025b3eae0e1cb2f7bc82a00aa1d8dafe1da6328"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.723868 4877 scope.go:117] "RemoveContainer" containerID="74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96"
Dec 11 18:06:18 crc kubenswrapper[4877]: E1211 18:06:18.724137 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6fb796c88-7b6rs_openshift-authentication(b5744078-060f-41c5-9af2-45078cebcfdc)\"" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" podUID="b5744078-060f-41c5-9af2-45078cebcfdc"
Dec 11 18:06:18 crc kubenswrapper[4877]: I1211 18:06:18.982132 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.223524 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.223816 4877 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.239714 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.239755 4877 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a6d7a57b-22d8-4336-b20e-cc3378cf1fc7"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.246025 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.246094 4877 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a6d7a57b-22d8-4336-b20e-cc3378cf1fc7"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.318176 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.318235 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:06:19 crc kubenswrapper[4877]: I1211 18:06:19.318759 4877 scope.go:117] "RemoveContainer" containerID="1f751b690292df66d800fdaad04085503d1e16a37eca6163b99a75c103b07a03"
Dec 11 18:06:19 crc kubenswrapper[4877]: E1211 18:06:19.319059 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-4qvdz_openshift-marketplace(1eaf037c-b9a9-4c1b-b108-0ffcad610322)\"" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" podUID="1eaf037c-b9a9-4c1b-b108-0ffcad610322"
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.222676 4877 scope.go:117] "RemoveContainer" containerID="74d729fbded91145f922bde2e98abb1cf1bc14ec1cfc0494e7bc9828df0bcb96"
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.798787 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6fb796c88-7b6rs_b5744078-060f-41c5-9af2-45078cebcfdc/oauth-openshift/1.log"
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.799101 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" event={"ID":"b5744078-060f-41c5-9af2-45078cebcfdc","Type":"ContainerStarted","Data":"dc8674676784b4290217be5fe11edd8b67454205d5ab22f6cee7a8eb7e0e2c05"}
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.800722 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs"
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.837919 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs" podStartSLOduration=110.837899424 podStartE2EDuration="1m50.837899424s" podCreationTimestamp="2025-12-11 18:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:06:29.82909906 +0000 UTC m=+350.855343144" watchObservedRunningTime="2025-12-11 18:06:29.837899424 +0000 UTC m=+350.864143468"
Dec 11 18:06:29 crc kubenswrapper[4877]: I1211 18:06:29.838349 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fb796c88-7b6rs"
Dec 11 18:06:30 crc kubenswrapper[4877]: I1211 18:06:30.215257 4877 scope.go:117] "RemoveContainer" containerID="1f751b690292df66d800fdaad04085503d1e16a37eca6163b99a75c103b07a03"
Dec 11 18:06:30 crc kubenswrapper[4877]: I1211 18:06:30.809030 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/1.log"
Dec 11 18:06:30 crc kubenswrapper[4877]: I1211 18:06:30.810659 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz" event={"ID":"1eaf037c-b9a9-4c1b-b108-0ffcad610322","Type":"ContainerStarted","Data":"f78db6acaac899a97f3285bd11bd58b7d9c109704431ffa8f69bb121b592a258"}
Dec 11 18:06:30 crc kubenswrapper[4877]: I1211 18:06:30.811984 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:06:30 crc kubenswrapper[4877]: I1211 18:06:30.817267 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4qvdz"
Dec 11 18:06:46 crc kubenswrapper[4877]: I1211 18:06:46.638464 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 18:06:46 crc kubenswrapper[4877]: I1211 18:06:46.639498 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.096544 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"]
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.097451 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" podUID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" containerName="controller-manager" containerID="cri-o://4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d" gracePeriod=30
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.191696 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"]
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.192400 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" podUID="884770ef-a741-47ce-bdde-79844ff9f886" containerName="route-controller-manager" containerID="cri-o://ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58" gracePeriod=30
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.493503 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb"
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.566276 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.614131 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8fsk\" (UniqueName: \"kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk\") pod \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.614215 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config\") pod \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.614269 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert\") pod \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.614407 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles\") pod \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.615314 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config" (OuterVolumeSpecName: "config") pod "2af0d3b7-c390-4fce-93a7-dea3cb5325d7" (UID: "2af0d3b7-c390-4fce-93a7-dea3cb5325d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.616359 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2af0d3b7-c390-4fce-93a7-dea3cb5325d7" (UID: "2af0d3b7-c390-4fce-93a7-dea3cb5325d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.616552 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca\") pod \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\" (UID: \"2af0d3b7-c390-4fce-93a7-dea3cb5325d7\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.616956 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "2af0d3b7-c390-4fce-93a7-dea3cb5325d7" (UID: "2af0d3b7-c390-4fce-93a7-dea3cb5325d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617035 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca\") pod \"884770ef-a741-47ce-bdde-79844ff9f886\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617074 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config\") pod \"884770ef-a741-47ce-bdde-79844ff9f886\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617095 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert\") pod \"884770ef-a741-47ce-bdde-79844ff9f886\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617205 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4vd\" (UniqueName: \"kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd\") pod \"884770ef-a741-47ce-bdde-79844ff9f886\" (UID: \"884770ef-a741-47ce-bdde-79844ff9f886\") "
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617723 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-config\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617750 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.617764 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.618497 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config" (OuterVolumeSpecName: "config") pod "884770ef-a741-47ce-bdde-79844ff9f886" (UID: "884770ef-a741-47ce-bdde-79844ff9f886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.619302 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca" (OuterVolumeSpecName: "client-ca") pod "884770ef-a741-47ce-bdde-79844ff9f886" (UID: "884770ef-a741-47ce-bdde-79844ff9f886"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.633581 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk" (OuterVolumeSpecName: "kube-api-access-w8fsk") pod "2af0d3b7-c390-4fce-93a7-dea3cb5325d7" (UID: "2af0d3b7-c390-4fce-93a7-dea3cb5325d7"). InnerVolumeSpecName "kube-api-access-w8fsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.640937 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "884770ef-a741-47ce-bdde-79844ff9f886" (UID: "884770ef-a741-47ce-bdde-79844ff9f886"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.641218 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2af0d3b7-c390-4fce-93a7-dea3cb5325d7" (UID: "2af0d3b7-c390-4fce-93a7-dea3cb5325d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.641501 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd" (OuterVolumeSpecName: "kube-api-access-qx4vd") pod "884770ef-a741-47ce-bdde-79844ff9f886" (UID: "884770ef-a741-47ce-bdde-79844ff9f886"). InnerVolumeSpecName "kube-api-access-qx4vd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718728 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4vd\" (UniqueName: \"kubernetes.io/projected/884770ef-a741-47ce-bdde-79844ff9f886-kube-api-access-qx4vd\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718779 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8fsk\" (UniqueName: \"kubernetes.io/projected/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-kube-api-access-w8fsk\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718791 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2af0d3b7-c390-4fce-93a7-dea3cb5325d7-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718802 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718811 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/884770ef-a741-47ce-bdde-79844ff9f886-config\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:05 crc kubenswrapper[4877]: I1211 18:07:05.718820 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/884770ef-a741-47ce-bdde-79844ff9f886-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.048659 4877 generic.go:334] "Generic (PLEG): container finished" podID="884770ef-a741-47ce-bdde-79844ff9f886" containerID="ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58" exitCode=0
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.048807 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" event={"ID":"884770ef-a741-47ce-bdde-79844ff9f886","Type":"ContainerDied","Data":"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"}
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.048852 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.048914 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t" event={"ID":"884770ef-a741-47ce-bdde-79844ff9f886","Type":"ContainerDied","Data":"b8d8648d41ab66ba29d7f7a8de66885b1b1bd53b099f6e9c80d5a8c77fb7ed6a"}
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.048960 4877 scope.go:117] "RemoveContainer" containerID="ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.050942 4877 generic.go:334] "Generic (PLEG): container finished" podID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" containerID="4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d" exitCode=0
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.050993 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" event={"ID":"2af0d3b7-c390-4fce-93a7-dea3cb5325d7","Type":"ContainerDied","Data":"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"}
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.051024 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb" event={"ID":"2af0d3b7-c390-4fce-93a7-dea3cb5325d7","Type":"ContainerDied","Data":"cc41b3797eed8daabf3f865c2c3742efed2ef33893846bf5d33399175d6e7d08"}
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.051108 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cxfwb"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.069976 4877 scope.go:117] "RemoveContainer" containerID="ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"
Dec 11 18:07:06 crc kubenswrapper[4877]: E1211 18:07:06.073257 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58\": container with ID starting with ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58 not found: ID does not exist" containerID="ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.073333 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58"} err="failed to get container status \"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58\": rpc error: code = NotFound desc = could not find container \"ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58\": container with ID starting with ffc38f4e6fdb737619e3c761ec823635c1c489c91ee6f03c3667f743b1229a58 not found: ID does not exist"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.073391 4877 scope.go:117] "RemoveContainer" containerID="4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.097797 4877 scope.go:117] "RemoveContainer" containerID="4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"
Dec 11 18:07:06 crc kubenswrapper[4877]: E1211 18:07:06.098341 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d\": container with ID starting with 4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d not found: ID does not exist" containerID="4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.098395 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d"} err="failed to get container status \"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d\": rpc error: code = NotFound desc = could not find container \"4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d\": container with ID starting with 4558d96b842da65bff42f2b15c0a6f0403ecdfcac11fb91dfe6ae0e883209a2d not found: ID does not exist"
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.104090 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"]
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.110057 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mgg8t"]
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.116058 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"]
Dec 11 18:07:06 crc kubenswrapper[4877]: I1211 18:07:06.122137 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cxfwb"]
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.003658 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"]
Dec 11 18:07:07 crc kubenswrapper[4877]: E1211 18:07:07.003969 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884770ef-a741-47ce-bdde-79844ff9f886" containerName="route-controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.003983 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="884770ef-a741-47ce-bdde-79844ff9f886" containerName="route-controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: E1211 18:07:07.003995 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" containerName="controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004002 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" containerName="controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: E1211 18:07:07.004015 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004022 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004139 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" containerName="controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004150 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="884770ef-a741-47ce-bdde-79844ff9f886" containerName="route-controller-manager"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004167 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.004751 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.008488 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.008648 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.008899 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.010187 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"]
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.011208 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.011477 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.011711 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.012216 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.014956 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.015268 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.015534 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.015746 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.015991 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.016205 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.020640 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"]
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.025059 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.026944 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"]
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159765 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159822 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5h2\" (UniqueName: \"kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159851 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159887 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfxj\" (UniqueName: \"kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159913 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159935 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.159955 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.160362 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s"
Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.160610 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config\")
pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.225156 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af0d3b7-c390-4fce-93a7-dea3cb5325d7" path="/var/lib/kubelet/pods/2af0d3b7-c390-4fce-93a7-dea3cb5325d7/volumes" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.226728 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="884770ef-a741-47ce-bdde-79844ff9f886" path="/var/lib/kubelet/pods/884770ef-a741-47ce-bdde-79844ff9f886/volumes" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.262691 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.262753 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.262787 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 
18:07:07.262982 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.263101 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.263181 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.264411 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5h2\" (UniqueName: \"kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.264452 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " 
pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.264566 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfxj\" (UniqueName: \"kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.265194 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.265247 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.265468 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.265731 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca\") pod 
\"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.266843 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.267819 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.278920 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.282536 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfxj\" (UniqueName: \"kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj\") pod \"route-controller-manager-96b64b5cc-mqdq9\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.296559 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4v5h2\" (UniqueName: \"kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2\") pod \"controller-manager-796b84794c-hsh7s\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.332217 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.348694 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.579826 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"] Dec 11 18:07:07 crc kubenswrapper[4877]: I1211 18:07:07.825465 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"] Dec 11 18:07:07 crc kubenswrapper[4877]: W1211 18:07:07.831610 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a32006_a978_4515_b8ad_8f0a84aff409.slice/crio-999372d82be38cbfe91f963346df8c1cc7a9c3947179419ef77f33a4695d87e9 WatchSource:0}: Error finding container 999372d82be38cbfe91f963346df8c1cc7a9c3947179419ef77f33a4695d87e9: Status 404 returned error can't find the container with id 999372d82be38cbfe91f963346df8c1cc7a9c3947179419ef77f33a4695d87e9 Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.072971 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" 
event={"ID":"9977fb7e-438a-4838-9c4e-d7087f5fcd84","Type":"ContainerStarted","Data":"5be20407c5d0953c331b0ee52f19b78fcec9154335bdc491def7d75ffe7fe55a"} Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.073028 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" event={"ID":"9977fb7e-438a-4838-9c4e-d7087f5fcd84","Type":"ContainerStarted","Data":"6141b7cc25d61cef9425fa7784ea27b8a0c20f2b5b94cf37cca84b3f35d3b17a"} Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.074747 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.076359 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" event={"ID":"f9a32006-a978-4515-b8ad-8f0a84aff409","Type":"ContainerStarted","Data":"3563a24a86f7a8fcc47e11127e4f9f925eb7e6de87be1a4e9d72725d174f7131"} Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.076459 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" event={"ID":"f9a32006-a978-4515-b8ad-8f0a84aff409","Type":"ContainerStarted","Data":"999372d82be38cbfe91f963346df8c1cc7a9c3947179419ef77f33a4695d87e9"} Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.076646 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.082574 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.096652 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" podStartSLOduration=3.096635902 podStartE2EDuration="3.096635902s" podCreationTimestamp="2025-12-11 18:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:08.094536569 +0000 UTC m=+389.120780623" watchObservedRunningTime="2025-12-11 18:07:08.096635902 +0000 UTC m=+389.122879966" Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.111729 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:08 crc kubenswrapper[4877]: I1211 18:07:08.118601 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" podStartSLOduration=3.11857573 podStartE2EDuration="3.11857573s" podCreationTimestamp="2025-12-11 18:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:08.114033604 +0000 UTC m=+389.140277688" watchObservedRunningTime="2025-12-11 18:07:08.11857573 +0000 UTC m=+389.144819774" Dec 11 18:07:14 crc kubenswrapper[4877]: I1211 18:07:14.856119 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"] Dec 11 18:07:14 crc kubenswrapper[4877]: I1211 18:07:14.856939 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" podUID="f9a32006-a978-4515-b8ad-8f0a84aff409" containerName="controller-manager" containerID="cri-o://3563a24a86f7a8fcc47e11127e4f9f925eb7e6de87be1a4e9d72725d174f7131" gracePeriod=30 Dec 11 18:07:14 crc kubenswrapper[4877]: I1211 18:07:14.862637 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"] Dec 11 18:07:14 crc kubenswrapper[4877]: I1211 18:07:14.862982 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" podUID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" containerName="route-controller-manager" containerID="cri-o://5be20407c5d0953c331b0ee52f19b78fcec9154335bdc491def7d75ffe7fe55a" gracePeriod=30 Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.126206 4877 generic.go:334] "Generic (PLEG): container finished" podID="f9a32006-a978-4515-b8ad-8f0a84aff409" containerID="3563a24a86f7a8fcc47e11127e4f9f925eb7e6de87be1a4e9d72725d174f7131" exitCode=0 Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.126294 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" event={"ID":"f9a32006-a978-4515-b8ad-8f0a84aff409","Type":"ContainerDied","Data":"3563a24a86f7a8fcc47e11127e4f9f925eb7e6de87be1a4e9d72725d174f7131"} Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.128697 4877 generic.go:334] "Generic (PLEG): container finished" podID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" containerID="5be20407c5d0953c331b0ee52f19b78fcec9154335bdc491def7d75ffe7fe55a" exitCode=0 Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.128736 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" event={"ID":"9977fb7e-438a-4838-9c4e-d7087f5fcd84","Type":"ContainerDied","Data":"5be20407c5d0953c331b0ee52f19b78fcec9154335bdc491def7d75ffe7fe55a"} Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.374785 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.485958 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config\") pod \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.486018 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfxj\" (UniqueName: \"kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj\") pod \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.486051 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca\") pod \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.486128 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert\") pod \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\" (UID: \"9977fb7e-438a-4838-9c4e-d7087f5fcd84\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.487238 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca" (OuterVolumeSpecName: "client-ca") pod "9977fb7e-438a-4838-9c4e-d7087f5fcd84" (UID: "9977fb7e-438a-4838-9c4e-d7087f5fcd84"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.487282 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config" (OuterVolumeSpecName: "config") pod "9977fb7e-438a-4838-9c4e-d7087f5fcd84" (UID: "9977fb7e-438a-4838-9c4e-d7087f5fcd84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.492512 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9977fb7e-438a-4838-9c4e-d7087f5fcd84" (UID: "9977fb7e-438a-4838-9c4e-d7087f5fcd84"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.492543 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj" (OuterVolumeSpecName: "kube-api-access-ndfxj") pod "9977fb7e-438a-4838-9c4e-d7087f5fcd84" (UID: "9977fb7e-438a-4838-9c4e-d7087f5fcd84"). InnerVolumeSpecName "kube-api-access-ndfxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.504269 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.588283 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9977fb7e-438a-4838-9c4e-d7087f5fcd84-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.588325 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.588341 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfxj\" (UniqueName: \"kubernetes.io/projected/9977fb7e-438a-4838-9c4e-d7087f5fcd84-kube-api-access-ndfxj\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.588353 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9977fb7e-438a-4838-9c4e-d7087f5fcd84-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.689144 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca\") pod \"f9a32006-a978-4515-b8ad-8f0a84aff409\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.689938 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5h2\" (UniqueName: \"kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2\") pod \"f9a32006-a978-4515-b8ad-8f0a84aff409\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.689990 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config\") pod \"f9a32006-a978-4515-b8ad-8f0a84aff409\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.690016 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert\") pod \"f9a32006-a978-4515-b8ad-8f0a84aff409\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.690043 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles\") pod \"f9a32006-a978-4515-b8ad-8f0a84aff409\" (UID: \"f9a32006-a978-4515-b8ad-8f0a84aff409\") " Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.690307 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca" (OuterVolumeSpecName: "client-ca") pod "f9a32006-a978-4515-b8ad-8f0a84aff409" (UID: "f9a32006-a978-4515-b8ad-8f0a84aff409"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.690991 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f9a32006-a978-4515-b8ad-8f0a84aff409" (UID: "f9a32006-a978-4515-b8ad-8f0a84aff409"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.691445 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config" (OuterVolumeSpecName: "config") pod "f9a32006-a978-4515-b8ad-8f0a84aff409" (UID: "f9a32006-a978-4515-b8ad-8f0a84aff409"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.694114 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2" (OuterVolumeSpecName: "kube-api-access-4v5h2") pod "f9a32006-a978-4515-b8ad-8f0a84aff409" (UID: "f9a32006-a978-4515-b8ad-8f0a84aff409"). InnerVolumeSpecName "kube-api-access-4v5h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.694248 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f9a32006-a978-4515-b8ad-8f0a84aff409" (UID: "f9a32006-a978-4515-b8ad-8f0a84aff409"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.791985 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5h2\" (UniqueName: \"kubernetes.io/projected/f9a32006-a978-4515-b8ad-8f0a84aff409-kube-api-access-4v5h2\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.792030 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.792040 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a32006-a978-4515-b8ad-8f0a84aff409-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.792052 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:15 crc kubenswrapper[4877]: I1211 18:07:15.792060 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a32006-a978-4515-b8ad-8f0a84aff409-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.008656 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr"] Dec 11 18:07:16 crc kubenswrapper[4877]: E1211 18:07:16.009082 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a32006-a978-4515-b8ad-8f0a84aff409" containerName="controller-manager" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.009103 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a32006-a978-4515-b8ad-8f0a84aff409" containerName="controller-manager" Dec 11 18:07:16 crc 
kubenswrapper[4877]: E1211 18:07:16.009131 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" containerName="route-controller-manager" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.009142 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" containerName="route-controller-manager" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.009316 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a32006-a978-4515-b8ad-8f0a84aff409" containerName="controller-manager" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.009335 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" containerName="route-controller-manager" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.009886 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.017556 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.019463 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.030584 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.044142 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.137129 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" event={"ID":"f9a32006-a978-4515-b8ad-8f0a84aff409","Type":"ContainerDied","Data":"999372d82be38cbfe91f963346df8c1cc7a9c3947179419ef77f33a4695d87e9"} Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.137163 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-hsh7s" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.137198 4877 scope.go:117] "RemoveContainer" containerID="3563a24a86f7a8fcc47e11127e4f9f925eb7e6de87be1a4e9d72725d174f7131" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.139950 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" event={"ID":"9977fb7e-438a-4838-9c4e-d7087f5fcd84","Type":"ContainerDied","Data":"6141b7cc25d61cef9425fa7784ea27b8a0c20f2b5b94cf37cca84b3f35d3b17a"} Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.140025 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.160182 4877 scope.go:117] "RemoveContainer" containerID="5be20407c5d0953c331b0ee52f19b78fcec9154335bdc491def7d75ffe7fe55a" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.183924 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.192726 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-mqdq9"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.196651 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198595 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz7p\" (UniqueName: \"kubernetes.io/projected/6aeec62c-8a19-4c40-9661-066058d119a4-kube-api-access-klz7p\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198637 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-client-ca\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198670 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198695 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198725 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198805 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aeec62c-8a19-4c40-9661-066058d119a4-serving-cert\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198868 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmmz\" (UniqueName: \"kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " 
pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198905 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-config\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.198936 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.199671 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-hsh7s"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300467 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klz7p\" (UniqueName: \"kubernetes.io/projected/6aeec62c-8a19-4c40-9661-066058d119a4-kube-api-access-klz7p\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300538 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-client-ca\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " 
pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300580 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300606 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300640 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300679 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aeec62c-8a19-4c40-9661-066058d119a4-serving-cert\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300721 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmmz\" (UniqueName: 
\"kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300753 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-config\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.300815 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.301969 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-client-ca\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.302454 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aeec62c-8a19-4c40-9661-066058d119a4-config\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.302462 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.303120 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.303174 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.312935 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aeec62c-8a19-4c40-9661-066058d119a4-serving-cert\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.312947 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc 
kubenswrapper[4877]: I1211 18:07:16.327733 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz7p\" (UniqueName: \"kubernetes.io/projected/6aeec62c-8a19-4c40-9661-066058d119a4-kube-api-access-klz7p\") pod \"route-controller-manager-65b49dc5b6-8qzdr\" (UID: \"6aeec62c-8a19-4c40-9661-066058d119a4\") " pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.329894 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmmz\" (UniqueName: \"kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz\") pod \"controller-manager-84765f478-pf2d5\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.376075 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.385538 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.638271 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.638822 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.816982 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr"] Dec 11 18:07:16 crc kubenswrapper[4877]: I1211 18:07:16.864731 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:16 crc kubenswrapper[4877]: W1211 18:07:16.871825 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9654560d_0488_4a57_a1ad_dfda774c82e2.slice/crio-2252f594c8180bd1aa24c4556abcbef0c009b128614735fc862c495cc0ba890b WatchSource:0}: Error finding container 2252f594c8180bd1aa24c4556abcbef0c009b128614735fc862c495cc0ba890b: Status 404 returned error can't find the container with id 2252f594c8180bd1aa24c4556abcbef0c009b128614735fc862c495cc0ba890b Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.151130 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" 
event={"ID":"6aeec62c-8a19-4c40-9661-066058d119a4","Type":"ContainerStarted","Data":"e9d6d5a0fc9c2d419d20ff1ad29e4ca12bc5fed761b953836e8b5b9fd8fef4ea"} Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.151552 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" event={"ID":"6aeec62c-8a19-4c40-9661-066058d119a4","Type":"ContainerStarted","Data":"98cfea2bbb65896d94eb9ff3e346548f491137f66cbb38d907d672875ff1f558"} Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.151585 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.153099 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" event={"ID":"9654560d-0488-4a57-a1ad-dfda774c82e2","Type":"ContainerStarted","Data":"8a79d44f11b7997f6bb664c716927bf96470713a95ae27122f81cade030eac98"} Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.153155 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" event={"ID":"9654560d-0488-4a57-a1ad-dfda774c82e2","Type":"ContainerStarted","Data":"2252f594c8180bd1aa24c4556abcbef0c009b128614735fc862c495cc0ba890b"} Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.153288 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.159049 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.184347 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" podStartSLOduration=3.184319651 podStartE2EDuration="3.184319651s" podCreationTimestamp="2025-12-11 18:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:17.179750149 +0000 UTC m=+398.205994193" watchObservedRunningTime="2025-12-11 18:07:17.184319651 +0000 UTC m=+398.210563695" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.210181 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" podStartSLOduration=3.210148793 podStartE2EDuration="3.210148793s" podCreationTimestamp="2025-12-11 18:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:17.206156878 +0000 UTC m=+398.232400942" watchObservedRunningTime="2025-12-11 18:07:17.210148793 +0000 UTC m=+398.236392857" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.264093 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9977fb7e-438a-4838-9c4e-d7087f5fcd84" path="/var/lib/kubelet/pods/9977fb7e-438a-4838-9c4e-d7087f5fcd84/volumes" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.264874 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a32006-a978-4515-b8ad-8f0a84aff409" path="/var/lib/kubelet/pods/f9a32006-a978-4515-b8ad-8f0a84aff409/volumes" Dec 11 18:07:17 crc kubenswrapper[4877]: I1211 18:07:17.304785 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b49dc5b6-8qzdr" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.488084 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dvf8r"] Dec 11 18:07:41 crc 
kubenswrapper[4877]: I1211 18:07:41.490975 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.513259 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dvf8r"] Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.573973 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-tls\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574031 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574061 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5v54\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-kube-api-access-z5v54\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574081 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6407994b-94c2-43c5-bf44-62c43382fbcb-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574110 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-certificates\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574208 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-bound-sa-token\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574233 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-trusted-ca\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.574362 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6407994b-94c2-43c5-bf44-62c43382fbcb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.610762 4877 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675651 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5v54\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-kube-api-access-z5v54\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675696 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6407994b-94c2-43c5-bf44-62c43382fbcb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675866 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-certificates\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675928 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-bound-sa-token\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675954 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-trusted-ca\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.675973 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6407994b-94c2-43c5-bf44-62c43382fbcb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.676940 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6407994b-94c2-43c5-bf44-62c43382fbcb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.677334 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-trusted-ca\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.677399 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-tls\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: 
\"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.678620 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-certificates\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.683688 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-registry-tls\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.686953 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6407994b-94c2-43c5-bf44-62c43382fbcb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.698303 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-bound-sa-token\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.704134 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5v54\" (UniqueName: 
\"kubernetes.io/projected/6407994b-94c2-43c5-bf44-62c43382fbcb-kube-api-access-z5v54\") pod \"image-registry-66df7c8f76-dvf8r\" (UID: \"6407994b-94c2-43c5-bf44-62c43382fbcb\") " pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:41 crc kubenswrapper[4877]: I1211 18:07:41.813913 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:42 crc kubenswrapper[4877]: I1211 18:07:42.242124 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dvf8r"] Dec 11 18:07:42 crc kubenswrapper[4877]: W1211 18:07:42.251122 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6407994b_94c2_43c5_bf44_62c43382fbcb.slice/crio-aafdd610231898ebeabfc2beab1f4edd1809c2bc7cfcd4e57d7952ed076ddfe5 WatchSource:0}: Error finding container aafdd610231898ebeabfc2beab1f4edd1809c2bc7cfcd4e57d7952ed076ddfe5: Status 404 returned error can't find the container with id aafdd610231898ebeabfc2beab1f4edd1809c2bc7cfcd4e57d7952ed076ddfe5 Dec 11 18:07:42 crc kubenswrapper[4877]: I1211 18:07:42.314669 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" event={"ID":"6407994b-94c2-43c5-bf44-62c43382fbcb","Type":"ContainerStarted","Data":"aafdd610231898ebeabfc2beab1f4edd1809c2bc7cfcd4e57d7952ed076ddfe5"} Dec 11 18:07:43 crc kubenswrapper[4877]: I1211 18:07:43.323013 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" event={"ID":"6407994b-94c2-43c5-bf44-62c43382fbcb","Type":"ContainerStarted","Data":"490792b1c44323cd967777a07e73410a999c91fad78d1d7f10a73ceca0b2d6de"} Dec 11 18:07:43 crc kubenswrapper[4877]: I1211 18:07:43.324580 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:07:43 crc kubenswrapper[4877]: I1211 18:07:43.375094 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" podStartSLOduration=2.375058011 podStartE2EDuration="2.375058011s" podCreationTimestamp="2025-12-11 18:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:43.371615692 +0000 UTC m=+424.397859776" watchObservedRunningTime="2025-12-11 18:07:43.375058011 +0000 UTC m=+424.401302105" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.083774 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.084775 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" podUID="9654560d-0488-4a57-a1ad-dfda774c82e2" containerName="controller-manager" containerID="cri-o://8a79d44f11b7997f6bb664c716927bf96470713a95ae27122f81cade030eac98" gracePeriod=30 Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.337571 4877 generic.go:334] "Generic (PLEG): container finished" podID="9654560d-0488-4a57-a1ad-dfda774c82e2" containerID="8a79d44f11b7997f6bb664c716927bf96470713a95ae27122f81cade030eac98" exitCode=0 Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.337668 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" event={"ID":"9654560d-0488-4a57-a1ad-dfda774c82e2","Type":"ContainerDied","Data":"8a79d44f11b7997f6bb664c716927bf96470713a95ae27122f81cade030eac98"} Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.690545 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.775028 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert\") pod \"9654560d-0488-4a57-a1ad-dfda774c82e2\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.775117 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca\") pod \"9654560d-0488-4a57-a1ad-dfda774c82e2\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.775149 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config\") pod \"9654560d-0488-4a57-a1ad-dfda774c82e2\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.775197 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wmmz\" (UniqueName: \"kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz\") pod \"9654560d-0488-4a57-a1ad-dfda774c82e2\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.775236 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles\") pod \"9654560d-0488-4a57-a1ad-dfda774c82e2\" (UID: \"9654560d-0488-4a57-a1ad-dfda774c82e2\") " Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.776343 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca" (OuterVolumeSpecName: "client-ca") pod "9654560d-0488-4a57-a1ad-dfda774c82e2" (UID: "9654560d-0488-4a57-a1ad-dfda774c82e2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.776436 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9654560d-0488-4a57-a1ad-dfda774c82e2" (UID: "9654560d-0488-4a57-a1ad-dfda774c82e2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.776595 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config" (OuterVolumeSpecName: "config") pod "9654560d-0488-4a57-a1ad-dfda774c82e2" (UID: "9654560d-0488-4a57-a1ad-dfda774c82e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.783550 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9654560d-0488-4a57-a1ad-dfda774c82e2" (UID: "9654560d-0488-4a57-a1ad-dfda774c82e2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.783962 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz" (OuterVolumeSpecName: "kube-api-access-8wmmz") pod "9654560d-0488-4a57-a1ad-dfda774c82e2" (UID: "9654560d-0488-4a57-a1ad-dfda774c82e2"). InnerVolumeSpecName "kube-api-access-8wmmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.876098 4877 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.876141 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.876152 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wmmz\" (UniqueName: \"kubernetes.io/projected/9654560d-0488-4a57-a1ad-dfda774c82e2-kube-api-access-8wmmz\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.876164 4877 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9654560d-0488-4a57-a1ad-dfda774c82e2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:45 crc kubenswrapper[4877]: I1211 18:07:45.876173 4877 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9654560d-0488-4a57-a1ad-dfda774c82e2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.347192 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" event={"ID":"9654560d-0488-4a57-a1ad-dfda774c82e2","Type":"ContainerDied","Data":"2252f594c8180bd1aa24c4556abcbef0c009b128614735fc862c495cc0ba890b"} Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.347261 4877 scope.go:117] "RemoveContainer" containerID="8a79d44f11b7997f6bb664c716927bf96470713a95ae27122f81cade030eac98" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.347282 4877 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-pf2d5" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.392273 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.398502 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-pf2d5"] Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.637822 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.637948 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.638047 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.639258 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:07:46 crc kubenswrapper[4877]: I1211 18:07:46.639469 4877 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c" gracePeriod=600 Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.030905 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55bb7f7494-hxq2g"] Dec 11 18:07:47 crc kubenswrapper[4877]: E1211 18:07:47.031177 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9654560d-0488-4a57-a1ad-dfda774c82e2" containerName="controller-manager" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.031193 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9654560d-0488-4a57-a1ad-dfda774c82e2" containerName="controller-manager" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.031287 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9654560d-0488-4a57-a1ad-dfda774c82e2" containerName="controller-manager" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.031836 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.035275 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.035363 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.035609 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.035789 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.036048 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.037213 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.047169 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.048820 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55bb7f7494-hxq2g"] Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.198064 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-proxy-ca-bundles\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " 
pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.198131 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-client-ca\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.198162 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-config\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.198252 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfgw\" (UniqueName: \"kubernetes.io/projected/866f7cec-1b57-4aeb-9d39-0fdccfa77233-kube-api-access-9bfgw\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.198284 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/866f7cec-1b57-4aeb-9d39-0fdccfa77233-serving-cert\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.223053 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9654560d-0488-4a57-a1ad-dfda774c82e2" 
path="/var/lib/kubelet/pods/9654560d-0488-4a57-a1ad-dfda774c82e2/volumes" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.299450 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bfgw\" (UniqueName: \"kubernetes.io/projected/866f7cec-1b57-4aeb-9d39-0fdccfa77233-kube-api-access-9bfgw\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.299507 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/866f7cec-1b57-4aeb-9d39-0fdccfa77233-serving-cert\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.299575 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-proxy-ca-bundles\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.299615 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-client-ca\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.299645 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-config\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.301038 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-config\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.304652 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-proxy-ca-bundles\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.305159 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/866f7cec-1b57-4aeb-9d39-0fdccfa77233-client-ca\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.306936 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/866f7cec-1b57-4aeb-9d39-0fdccfa77233-serving-cert\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.334026 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9bfgw\" (UniqueName: \"kubernetes.io/projected/866f7cec-1b57-4aeb-9d39-0fdccfa77233-kube-api-access-9bfgw\") pod \"controller-manager-55bb7f7494-hxq2g\" (UID: \"866f7cec-1b57-4aeb-9d39-0fdccfa77233\") " pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.354082 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c" exitCode=0 Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.354163 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c"} Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.354206 4877 scope.go:117] "RemoveContainer" containerID="a07ae20aed67a86df28f5ef6eaac55f59cd4cb9c0ce21453260a0a8489d5a05a" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.359685 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:47 crc kubenswrapper[4877]: I1211 18:07:47.581255 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55bb7f7494-hxq2g"] Dec 11 18:07:47 crc kubenswrapper[4877]: W1211 18:07:47.592211 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod866f7cec_1b57_4aeb_9d39_0fdccfa77233.slice/crio-389bed834c144c6c5b9bf0898325bf4233ade1e18cb4f41d8c0677f849aa52da WatchSource:0}: Error finding container 389bed834c144c6c5b9bf0898325bf4233ade1e18cb4f41d8c0677f849aa52da: Status 404 returned error can't find the container with id 389bed834c144c6c5b9bf0898325bf4233ade1e18cb4f41d8c0677f849aa52da Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.365478 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9"} Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.367475 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" event={"ID":"866f7cec-1b57-4aeb-9d39-0fdccfa77233","Type":"ContainerStarted","Data":"83c798892b3f29822147cceaa4e146d9256003d60543152201105db6bed11f4a"} Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.367509 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" event={"ID":"866f7cec-1b57-4aeb-9d39-0fdccfa77233","Type":"ContainerStarted","Data":"389bed834c144c6c5b9bf0898325bf4233ade1e18cb4f41d8c0677f849aa52da"} Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.367778 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.372712 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" Dec 11 18:07:48 crc kubenswrapper[4877]: I1211 18:07:48.405572 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55bb7f7494-hxq2g" podStartSLOduration=3.40554176 podStartE2EDuration="3.40554176s" podCreationTimestamp="2025-12-11 18:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:07:48.40343744 +0000 UTC m=+429.429681494" watchObservedRunningTime="2025-12-11 18:07:48.40554176 +0000 UTC m=+429.431785814" Dec 11 18:08:01 crc kubenswrapper[4877]: I1211 18:08:01.825026 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dvf8r" Dec 11 18:08:01 crc kubenswrapper[4877]: I1211 18:08:01.894210 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"] Dec 11 18:08:26 crc kubenswrapper[4877]: I1211 18:08:26.931589 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" podUID="0fff3932-5b5f-49af-a652-9030dd8f6139" containerName="registry" containerID="cri-o://d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917" gracePeriod=30 Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.387693 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479178 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479264 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479331 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgrbw\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479392 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479437 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479461 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479609 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.479666 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls\") pod \"0fff3932-5b5f-49af-a652-9030dd8f6139\" (UID: \"0fff3932-5b5f-49af-a652-9030dd8f6139\") " Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.480470 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.480623 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.487653 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.488724 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw" (OuterVolumeSpecName: "kube-api-access-vgrbw") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "kube-api-access-vgrbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.489898 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.499778 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.503303 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.514627 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0fff3932-5b5f-49af-a652-9030dd8f6139" (UID: "0fff3932-5b5f-49af-a652-9030dd8f6139"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581174 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgrbw\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-kube-api-access-vgrbw\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581233 4877 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581244 4877 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fff3932-5b5f-49af-a652-9030dd8f6139-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581253 4877 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/0fff3932-5b5f-49af-a652-9030dd8f6139-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581266 4877 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581275 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.581284 4877 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fff3932-5b5f-49af-a652-9030dd8f6139-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.615335 4877 generic.go:334] "Generic (PLEG): container finished" podID="0fff3932-5b5f-49af-a652-9030dd8f6139" containerID="d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917" exitCode=0 Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.615480 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.615456 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" event={"ID":"0fff3932-5b5f-49af-a652-9030dd8f6139","Type":"ContainerDied","Data":"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917"} Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.615930 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89rk4" event={"ID":"0fff3932-5b5f-49af-a652-9030dd8f6139","Type":"ContainerDied","Data":"5d642c7aec73ab6c8a252e75b4e09163a238a19104336a6ea9f07d99d5efad7b"} Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.615963 4877 scope.go:117] "RemoveContainer" containerID="d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.641580 4877 scope.go:117] "RemoveContainer" containerID="d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917" Dec 11 18:08:27 crc kubenswrapper[4877]: E1211 18:08:27.642407 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917\": container with ID starting with d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917 not found: ID does not exist" containerID="d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.642736 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917"} err="failed to get container status \"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917\": rpc error: code = NotFound desc = could not find container 
\"d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917\": container with ID starting with d454fa03cf296c4f9ba280daf06707a9d91965c41ed1cb0ff979e7e1833b6917 not found: ID does not exist" Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.662668 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"] Dec 11 18:08:27 crc kubenswrapper[4877]: I1211 18:08:27.667749 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89rk4"] Dec 11 18:08:29 crc kubenswrapper[4877]: I1211 18:08:29.231598 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fff3932-5b5f-49af-a652-9030dd8f6139" path="/var/lib/kubelet/pods/0fff3932-5b5f-49af-a652-9030dd8f6139/volumes" Dec 11 18:10:16 crc kubenswrapper[4877]: I1211 18:10:16.638289 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:10:16 crc kubenswrapper[4877]: I1211 18:10:16.638863 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:10:46 crc kubenswrapper[4877]: I1211 18:10:46.638015 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:10:46 crc kubenswrapper[4877]: I1211 18:10:46.638810 4877 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.286099 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4sld"] Dec 11 18:10:48 crc kubenswrapper[4877]: E1211 18:10:48.286900 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff3932-5b5f-49af-a652-9030dd8f6139" containerName="registry" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.286916 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff3932-5b5f-49af-a652-9030dd8f6139" containerName="registry" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.287029 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff3932-5b5f-49af-a652-9030dd8f6139" containerName="registry" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.287508 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.292351 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kkd\" (UniqueName: \"kubernetes.io/projected/57038dbe-549d-4b29-b24a-4d32261c3a50-kube-api-access-w8kkd\") pod \"cert-manager-cainjector-7f985d654d-s4sld\" (UID: \"57038dbe-549d-4b29-b24a-4d32261c3a50\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.292842 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.292724 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.294159 4877 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-frppl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.304542 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fsjfl"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.305646 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fsjfl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.310921 4877 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wtvf4" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.313197 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4sld"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.320259 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qzspd"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.321433 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.324342 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fsjfl"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.324796 4877 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-frqdr" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.331468 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qzspd"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.394432 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2h6\" (UniqueName: \"kubernetes.io/projected/1212481e-8248-4bfe-903d-b6b08b87ead6-kube-api-access-lq2h6\") pod \"cert-manager-5b446d88c5-fsjfl\" (UID: \"1212481e-8248-4bfe-903d-b6b08b87ead6\") " pod="cert-manager/cert-manager-5b446d88c5-fsjfl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.394500 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kkd\" (UniqueName: 
\"kubernetes.io/projected/57038dbe-549d-4b29-b24a-4d32261c3a50-kube-api-access-w8kkd\") pod \"cert-manager-cainjector-7f985d654d-s4sld\" (UID: \"57038dbe-549d-4b29-b24a-4d32261c3a50\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.394533 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwjd\" (UniqueName: \"kubernetes.io/projected/a89aaf35-60f1-481e-9f2f-4bcf0f70cec7-kube-api-access-twwjd\") pod \"cert-manager-webhook-5655c58dd6-qzspd\" (UID: \"a89aaf35-60f1-481e-9f2f-4bcf0f70cec7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.416241 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kkd\" (UniqueName: \"kubernetes.io/projected/57038dbe-549d-4b29-b24a-4d32261c3a50-kube-api-access-w8kkd\") pod \"cert-manager-cainjector-7f985d654d-s4sld\" (UID: \"57038dbe-549d-4b29-b24a-4d32261c3a50\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.496207 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2h6\" (UniqueName: \"kubernetes.io/projected/1212481e-8248-4bfe-903d-b6b08b87ead6-kube-api-access-lq2h6\") pod \"cert-manager-5b446d88c5-fsjfl\" (UID: \"1212481e-8248-4bfe-903d-b6b08b87ead6\") " pod="cert-manager/cert-manager-5b446d88c5-fsjfl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.496295 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwjd\" (UniqueName: \"kubernetes.io/projected/a89aaf35-60f1-481e-9f2f-4bcf0f70cec7-kube-api-access-twwjd\") pod \"cert-manager-webhook-5655c58dd6-qzspd\" (UID: \"a89aaf35-60f1-481e-9f2f-4bcf0f70cec7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 
18:10:48.518778 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2h6\" (UniqueName: \"kubernetes.io/projected/1212481e-8248-4bfe-903d-b6b08b87ead6-kube-api-access-lq2h6\") pod \"cert-manager-5b446d88c5-fsjfl\" (UID: \"1212481e-8248-4bfe-903d-b6b08b87ead6\") " pod="cert-manager/cert-manager-5b446d88c5-fsjfl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.519680 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwjd\" (UniqueName: \"kubernetes.io/projected/a89aaf35-60f1-481e-9f2f-4bcf0f70cec7-kube-api-access-twwjd\") pod \"cert-manager-webhook-5655c58dd6-qzspd\" (UID: \"a89aaf35-60f1-481e-9f2f-4bcf0f70cec7\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.623425 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.632475 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fsjfl" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.640458 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.869294 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s4sld"] Dec 11 18:10:48 crc kubenswrapper[4877]: I1211 18:10:48.875708 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:10:49 crc kubenswrapper[4877]: I1211 18:10:49.121237 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fsjfl"] Dec 11 18:10:49 crc kubenswrapper[4877]: I1211 18:10:49.124675 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qzspd"] Dec 11 18:10:49 crc kubenswrapper[4877]: I1211 18:10:49.727543 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" event={"ID":"a89aaf35-60f1-481e-9f2f-4bcf0f70cec7","Type":"ContainerStarted","Data":"08db680db64da8a4edc3824e24f825cd13d090bd997bb62f976ff09dc92c2768"} Dec 11 18:10:49 crc kubenswrapper[4877]: I1211 18:10:49.728627 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" event={"ID":"57038dbe-549d-4b29-b24a-4d32261c3a50","Type":"ContainerStarted","Data":"361148fda6bbd27ce6ab3e5d126b39e150b9e232cdfbc2d0fed5f19e0a51f38b"} Dec 11 18:10:49 crc kubenswrapper[4877]: I1211 18:10:49.729642 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fsjfl" event={"ID":"1212481e-8248-4bfe-903d-b6b08b87ead6","Type":"ContainerStarted","Data":"0c1477702b26ad8165ea4b7fb4a86cd5ef7961f49515ab641e55a5cb7a1b2383"} Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.754150 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fsjfl" 
event={"ID":"1212481e-8248-4bfe-903d-b6b08b87ead6","Type":"ContainerStarted","Data":"bef7b1644c59f96a51d0adb47dd516b67eb756bfb480b8ce82c8abfab08e19f4"} Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.755906 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" event={"ID":"a89aaf35-60f1-481e-9f2f-4bcf0f70cec7","Type":"ContainerStarted","Data":"4a4c15cd026c26e372fc00efa966d5eaf6cb5ad576f3491d2595dea3de2c8c4a"} Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.756053 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.757404 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" event={"ID":"57038dbe-549d-4b29-b24a-4d32261c3a50","Type":"ContainerStarted","Data":"76cad1a0dd8fcbd2db83412228e987ba39035eece13c12700ea966170d0a27c7"} Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.777261 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-fsjfl" podStartSLOduration=2.196140563 podStartE2EDuration="4.777237922s" podCreationTimestamp="2025-12-11 18:10:48 +0000 UTC" firstStartedPulling="2025-12-11 18:10:49.133100234 +0000 UTC m=+610.159344288" lastFinishedPulling="2025-12-11 18:10:51.714197603 +0000 UTC m=+612.740441647" observedRunningTime="2025-12-11 18:10:52.774948279 +0000 UTC m=+613.801192333" watchObservedRunningTime="2025-12-11 18:10:52.777237922 +0000 UTC m=+613.803481976" Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.817597 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-s4sld" podStartSLOduration=2.054132661 podStartE2EDuration="4.817571357s" podCreationTimestamp="2025-12-11 18:10:48 +0000 UTC" firstStartedPulling="2025-12-11 18:10:48.875314042 +0000 
UTC m=+609.901558096" lastFinishedPulling="2025-12-11 18:10:51.638752748 +0000 UTC m=+612.664996792" observedRunningTime="2025-12-11 18:10:52.812067835 +0000 UTC m=+613.838311899" watchObservedRunningTime="2025-12-11 18:10:52.817571357 +0000 UTC m=+613.843815411" Dec 11 18:10:52 crc kubenswrapper[4877]: I1211 18:10:52.828491 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd" podStartSLOduration=1.502806757 podStartE2EDuration="4.828458407s" podCreationTimestamp="2025-12-11 18:10:48 +0000 UTC" firstStartedPulling="2025-12-11 18:10:49.1337075 +0000 UTC m=+610.159951544" lastFinishedPulling="2025-12-11 18:10:52.45935914 +0000 UTC m=+613.485603194" observedRunningTime="2025-12-11 18:10:52.827692156 +0000 UTC m=+613.853936230" watchObservedRunningTime="2025-12-11 18:10:52.828458407 +0000 UTC m=+613.854702471" Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.785173 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvb5p"] Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.786955 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-controller" containerID="cri-o://f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787015 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="nbdb" containerID="cri-o://a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787096 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" 
podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-node" containerID="cri-o://ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787085 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787139 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-acl-logging" containerID="cri-o://21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787346 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="northd" containerID="cri-o://f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.787458 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="sbdb" containerID="cri-o://39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: I1211 18:10:57.845302 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" 
containerID="cri-o://8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" gracePeriod=30 Dec 11 18:10:57 crc kubenswrapper[4877]: E1211 18:10:57.926212 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d is running failed: container process not found" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 11 18:10:57 crc kubenswrapper[4877]: E1211 18:10:57.926722 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d is running failed: container process not found" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 11 18:10:57 crc kubenswrapper[4877]: E1211 18:10:57.927114 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d is running failed: container process not found" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 11 18:10:57 crc kubenswrapper[4877]: E1211 18:10:57.927158 4877 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" 
podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.141367 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/3.log" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.144335 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovn-acl-logging/0.log" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.144944 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovn-controller/0.log" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.145446 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.224763 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-js84z"] Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225115 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-acl-logging" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225146 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-acl-logging" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225164 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225175 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225189 
4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kubecfg-setup" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225205 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kubecfg-setup" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225221 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-node" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225233 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-node" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225246 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225257 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225275 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225286 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225299 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="northd" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225310 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="northd" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 
18:10:58.225324 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="nbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225337 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="nbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225357 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225392 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225411 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="sbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225422 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="sbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225443 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225454 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.225467 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225477 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225690 4877 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225712 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="nbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225728 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-acl-logging" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225741 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-node" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225760 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225771 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225782 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovn-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225794 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="northd" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225812 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225833 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="sbdb" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.225843 
4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.226032 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.226047 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.226215 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerName="ovnkube-controller" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.229213 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236580 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236661 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-node-log\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236690 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236741 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-systemd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236763 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-ovn\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236798 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-config\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236834 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-script-lib\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236866 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f9hkp\" (UniqueName: \"kubernetes.io/projected/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-kube-api-access-f9hkp\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236895 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-log-socket\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236928 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-systemd-units\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.236961 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-var-lib-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237016 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237051 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-kubelet\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237266 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-env-overrides\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237308 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-netd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237819 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovn-node-metrics-cert\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.237890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-netns\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 
18:10:58.237957 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-bin\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.241221 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-etc-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.241251 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-slash\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342169 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342214 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342263 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342311 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342342 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342392 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342417 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342436 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: 
\"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342454 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342473 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342512 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342528 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342548 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342578 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342605 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342631 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342649 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342674 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnsg\" (UniqueName: \"kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342727 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 
crc kubenswrapper[4877]: I1211 18:10:58.342750 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert\") pod \"ea4114b7-a44c-4220-a321-9f18bbb90151\" (UID: \"ea4114b7-a44c-4220-a321-9f18bbb90151\") " Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342964 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.342999 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-node-log\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343021 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343048 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-systemd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343064 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-ovn\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343087 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-config\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343118 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-script-lib\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343138 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hkp\" (UniqueName: \"kubernetes.io/projected/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-kube-api-access-f9hkp\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-log-socket\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343177 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-systemd-units\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343198 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-var-lib-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343222 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343241 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-kubelet\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343256 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-env-overrides\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343278 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-netd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343301 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovn-node-metrics-cert\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343321 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-netns\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343343 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-bin\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343365 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-etc-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343397 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-slash\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343477 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-slash\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343541 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343567 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343590 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.343998 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344035 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344054 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344072 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash" (OuterVolumeSpecName: "host-slash") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344089 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344111 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-ovn\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.344932 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345052 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod 
"ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345105 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log" (OuterVolumeSpecName: "node-log") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345137 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345157 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-log-socket\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345169 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket" (OuterVolumeSpecName: "log-socket") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345170 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345207 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-ovn-kubernetes\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345250 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-node-log\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345287 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-systemd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345716 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345742 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-run-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345808 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345838 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-bin\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345902 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-etc-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.345935 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-run-netns\") pod \"ovnkube-node-js84z\" (UID: 
\"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346006 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-kubelet\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346043 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-systemd-units\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346049 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-script-lib\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346075 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-host-cni-netd\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346100 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-var-lib-openvswitch\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:10:58 
crc kubenswrapper[4877]: I1211 18:10:58.346161 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346184 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovnkube-config\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.346656 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-env-overrides\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.352046 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg" (OuterVolumeSpecName: "kube-api-access-dvnsg") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "kube-api-access-dvnsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.352628 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-ovn-node-metrics-cert\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.352858 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.360034 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ea4114b7-a44c-4220-a321-9f18bbb90151" (UID: "ea4114b7-a44c-4220-a321-9f18bbb90151"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.364947 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hkp\" (UniqueName: \"kubernetes.io/projected/679cfb1a-f6be-48b1-86f8-30db8bc9f50d-kube-api-access-f9hkp\") pod \"ovnkube-node-js84z\" (UID: \"679cfb1a-f6be-48b1-86f8-30db8bc9f50d\") " pod="openshift-ovn-kubernetes/ovnkube-node-js84z"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.444932 4877 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.444996 4877 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-node-log\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445010 4877 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445020 4877 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-log-socket\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445034 4877 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445049 4877 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445064 4877 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445075 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnsg\" (UniqueName: \"kubernetes.io/projected/ea4114b7-a44c-4220-a321-9f18bbb90151-kube-api-access-dvnsg\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445086 4877 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445099 4877 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4114b7-a44c-4220-a321-9f18bbb90151-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445110 4877 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445121 4877 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445132 4877 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445144 4877 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4114b7-a44c-4220-a321-9f18bbb90151-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445155 4877 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445166 4877 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445176 4877 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-slash\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445186 4877 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445196 4877 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.445206 4877 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4114b7-a44c-4220-a321-9f18bbb90151-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.545656 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-js84z"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.644693 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-qzspd"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.808634 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovnkube-controller/3.log"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.811491 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovn-acl-logging/0.log"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.814173 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvb5p_ea4114b7-a44c-4220-a321-9f18bbb90151/ovn-controller/0.log"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816041 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816102 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816118 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816136 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816149 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816129 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816240 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816253 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816279 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816294 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816311 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816332 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816162 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816313 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816416 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9" exitCode=143
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816455 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea4114b7-a44c-4220-a321-9f18bbb90151" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2" exitCode=143
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816349 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816548 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816589 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816611 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816623 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816634 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816646 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816659 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816671 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816755 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816792 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816826 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816837 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816848 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816859 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816870 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816880 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816892 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816903 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816914 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816931 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816953 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816966 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816978 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.816989 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.817001 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818489 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818511 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818539 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818549 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818562 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818583 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvb5p" event={"ID":"ea4114b7-a44c-4220-a321-9f18bbb90151","Type":"ContainerDied","Data":"879cba78f4464cca9871cfed9888654595bf2d321b37cea1f337481c25bfa93f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818607 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818622 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818632 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818641 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818651 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818659 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818668 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818679 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818687 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.818695 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.821185 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/2.log"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.821819 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/1.log"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.821884 4877 generic.go:334] "Generic (PLEG): container finished" podID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c" containerID="276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0" exitCode=2
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.821988 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerDied","Data":"276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.822043 4877 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.822710 4877 scope.go:117] "RemoveContainer" containerID="276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0"
Dec 11 18:10:58 crc kubenswrapper[4877]: E1211 18:10:58.822934 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gwfnt_openshift-multus(61afe7d0-ec5b-41aa-a8fb-6628b863a59c)\"" pod="openshift-multus/multus-gwfnt" podUID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.826589 4877 generic.go:334] "Generic (PLEG): container finished" podID="679cfb1a-f6be-48b1-86f8-30db8bc9f50d" containerID="aab638524aa2285f33d5a0bd9cba41970abc90c0e189375e38174a0f444465b4" exitCode=0
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.826633 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerDied","Data":"aab638524aa2285f33d5a0bd9cba41970abc90c0e189375e38174a0f444465b4"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.826663 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"0d5c2349ef4cb31076b6eebfe7bd4acc489029ddc58be2c9db3d56b7105c69a2"}
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.847441 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.877588 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvb5p"]
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.881114 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvb5p"]
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.881704 4877 scope.go:117] "RemoveContainer" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.897905 4877 scope.go:117] "RemoveContainer" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.958649 4877 scope.go:117] "RemoveContainer" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.982636 4877 scope.go:117] "RemoveContainer" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"
Dec 11 18:10:58 crc kubenswrapper[4877]: I1211 18:10:58.996590 4877 scope.go:117] "RemoveContainer" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.013624 4877 scope.go:117] "RemoveContainer" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.027360 4877 scope.go:117] "RemoveContainer" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.048725 4877 scope.go:117] "RemoveContainer" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.070620 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.071326 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not exist" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.071361 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} err="failed to get container status \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": rpc error: code = NotFound desc = could not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.071404 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.071813 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": container with ID starting with d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c not found: ID does not exist" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.071847 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} err="failed to get container status \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": rpc error: code = NotFound desc = could not find container \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": container with ID starting with d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.071864 4877 scope.go:117] "RemoveContainer" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.072283 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": container with ID starting with 39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d not found: ID does not exist" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.072306 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"} err="failed to get container status \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": rpc error: code = NotFound desc = could not find container \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": container with ID starting with 39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.072321 4877 scope.go:117] "RemoveContainer" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.072815 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": container with ID starting with a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f not found: ID does not exist" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.072867 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"} err="failed to get container status \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": rpc error: code = NotFound desc = could not find container \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": container with ID starting with a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.072906 4877 scope.go:117] "RemoveContainer" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.073385 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": container with ID starting with f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967 not found: ID does not exist" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.073411 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"} err="failed to get container status \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": rpc error: code = NotFound desc = could not find container \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": container with ID starting with f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967 not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.073428 4877 scope.go:117] "RemoveContainer" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.073696 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": container with ID starting with 0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b not found: ID does not exist" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.073715 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"} err="failed to get container status \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": rpc error: code = NotFound desc = could not find container \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": container with ID starting with 0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.073735 4877 scope.go:117] "RemoveContainer" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.074321 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": container with ID starting with ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9 not found: ID does not exist" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.074346 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"} err="failed to get container status \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": rpc error: code = NotFound desc = could not find container \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": container with ID starting with ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9 not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.074359 4877 scope.go:117] "RemoveContainer" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.074602 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": container with ID starting with 21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9 not found: ID does not exist" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.074620 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"} err="failed to get container status \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": rpc error: code = NotFound desc = could not find container \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": container with ID starting with 21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9 not found: ID does not exist"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.074633 4877 scope.go:117] "RemoveContainer" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"
Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.075046 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": container with ID starting with f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2 not found: ID does not exist" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"
Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075065 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"} err="failed to get container status \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": rpc error: code = NotFound desc = could not find container \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": container with ID starting with f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075080 4877 scope.go:117] "RemoveContainer" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" Dec 11 18:10:59 crc kubenswrapper[4877]: E1211 18:10:59.075477 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": container with ID starting with 1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90 not found: ID does not exist" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075494 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"} err="failed to get container status \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": rpc error: code = NotFound desc = could not find container \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": container with ID starting with 1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075508 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075806 4877 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} err="failed to get container status \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": rpc error: code = NotFound desc = could not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.075827 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076122 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} err="failed to get container status \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": rpc error: code = NotFound desc = could not find container \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": container with ID starting with d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076140 4877 scope.go:117] "RemoveContainer" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076415 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"} err="failed to get container status \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": rpc error: code = NotFound desc = could not find container \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": container with ID starting with 39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d not 
found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076434 4877 scope.go:117] "RemoveContainer" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076781 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"} err="failed to get container status \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": rpc error: code = NotFound desc = could not find container \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": container with ID starting with a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076796 4877 scope.go:117] "RemoveContainer" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.076992 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"} err="failed to get container status \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": rpc error: code = NotFound desc = could not find container \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": container with ID starting with f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077008 4877 scope.go:117] "RemoveContainer" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077412 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"} err="failed to get 
container status \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": rpc error: code = NotFound desc = could not find container \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": container with ID starting with 0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077428 4877 scope.go:117] "RemoveContainer" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077623 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"} err="failed to get container status \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": rpc error: code = NotFound desc = could not find container \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": container with ID starting with ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077639 4877 scope.go:117] "RemoveContainer" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077874 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"} err="failed to get container status \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": rpc error: code = NotFound desc = could not find container \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": container with ID starting with 21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.077890 4877 scope.go:117] "RemoveContainer" 
containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078151 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"} err="failed to get container status \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": rpc error: code = NotFound desc = could not find container \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": container with ID starting with f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078170 4877 scope.go:117] "RemoveContainer" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078409 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"} err="failed to get container status \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": rpc error: code = NotFound desc = could not find container \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": container with ID starting with 1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078425 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078667 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} err="failed to get container status \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": rpc error: code = NotFound desc = could 
not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078691 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078901 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} err="failed to get container status \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": rpc error: code = NotFound desc = could not find container \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": container with ID starting with d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.078919 4877 scope.go:117] "RemoveContainer" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079147 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"} err="failed to get container status \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": rpc error: code = NotFound desc = could not find container \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": container with ID starting with 39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079164 4877 scope.go:117] "RemoveContainer" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 
18:10:59.079417 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"} err="failed to get container status \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": rpc error: code = NotFound desc = could not find container \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": container with ID starting with a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079436 4877 scope.go:117] "RemoveContainer" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079698 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"} err="failed to get container status \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": rpc error: code = NotFound desc = could not find container \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": container with ID starting with f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079716 4877 scope.go:117] "RemoveContainer" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.079989 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"} err="failed to get container status \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": rpc error: code = NotFound desc = could not find container \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": container with ID starting with 
0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.080006 4877 scope.go:117] "RemoveContainer" containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.080399 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"} err="failed to get container status \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": rpc error: code = NotFound desc = could not find container \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": container with ID starting with ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.080452 4877 scope.go:117] "RemoveContainer" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.080781 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"} err="failed to get container status \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": rpc error: code = NotFound desc = could not find container \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": container with ID starting with 21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.080804 4877 scope.go:117] "RemoveContainer" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081131 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"} err="failed to get container status \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": rpc error: code = NotFound desc = could not find container \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": container with ID starting with f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081162 4877 scope.go:117] "RemoveContainer" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081500 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"} err="failed to get container status \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": rpc error: code = NotFound desc = could not find container \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": container with ID starting with 1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081521 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081863 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} err="failed to get container status \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": rpc error: code = NotFound desc = could not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not 
exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.081903 4877 scope.go:117] "RemoveContainer" containerID="d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082193 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c"} err="failed to get container status \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": rpc error: code = NotFound desc = could not find container \"d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c\": container with ID starting with d9ba9490b122e217005e885fe5e8c36196010d5243f44ba8cc725ec797a6f01c not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082232 4877 scope.go:117] "RemoveContainer" containerID="39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082545 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d"} err="failed to get container status \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": rpc error: code = NotFound desc = could not find container \"39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d\": container with ID starting with 39f16cf89dafa56002344f94ee4ed844d11b8737214fc1a9937e2f4044c7522d not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082569 4877 scope.go:117] "RemoveContainer" containerID="a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082824 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f"} err="failed to get container status 
\"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": rpc error: code = NotFound desc = could not find container \"a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f\": container with ID starting with a1a816ae2bf621414a0c4cbfd477dbee3296035fed45c0fbab6f23f4d708216f not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.082843 4877 scope.go:117] "RemoveContainer" containerID="f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.083186 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967"} err="failed to get container status \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": rpc error: code = NotFound desc = could not find container \"f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967\": container with ID starting with f3baafc6de02e3e0bfe0986357c568c114c1301ec5d0a247baa61bc0ba29a967 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.083250 4877 scope.go:117] "RemoveContainer" containerID="0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.083682 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b"} err="failed to get container status \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": rpc error: code = NotFound desc = could not find container \"0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b\": container with ID starting with 0c4715ebb86dbc5237c404bc7d2897dbd5054edcd7ef400da81323a8b6cbd70b not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.083713 4877 scope.go:117] "RemoveContainer" 
containerID="ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084087 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9"} err="failed to get container status \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": rpc error: code = NotFound desc = could not find container \"ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9\": container with ID starting with ea222049f06b06fd1124f2b4b142e4898ce8bfc8032cf2120efd03d52b3ea5d9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084129 4877 scope.go:117] "RemoveContainer" containerID="21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084511 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9"} err="failed to get container status \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": rpc error: code = NotFound desc = could not find container \"21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9\": container with ID starting with 21cbad1329d2b42126becb7f790984c612f6c96eb7ee600ed98ab6062e62c8e9 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084540 4877 scope.go:117] "RemoveContainer" containerID="f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084920 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2"} err="failed to get container status \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": rpc error: code = NotFound desc = could 
not find container \"f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2\": container with ID starting with f9ab05d176bd3ad33ba56fb728c00cff90196157200d7dfc49cc7a22ea81c9b2 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.084945 4877 scope.go:117] "RemoveContainer" containerID="1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.085275 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90"} err="failed to get container status \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": rpc error: code = NotFound desc = could not find container \"1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90\": container with ID starting with 1e7d1df5f78a9707e1b943dad9baf2448372a8442a663489faad1e16d3377b90 not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.085298 4877 scope.go:117] "RemoveContainer" containerID="8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.085666 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d"} err="failed to get container status \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": rpc error: code = NotFound desc = could not find container \"8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d\": container with ID starting with 8a1f6535a0030a58abcfd52ddcee0b3ddb61f38bdb4b5e5f141605ffbe7afb5d not found: ID does not exist" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.224612 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4114b7-a44c-4220-a321-9f18bbb90151" 
path="/var/lib/kubelet/pods/ea4114b7-a44c-4220-a321-9f18bbb90151/volumes" Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"12cd64a2feb026ca55e6a350e158e082905ac31af27eb55991cc8c80d29cad79"} Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841239 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"a7c3188888aaeae9957c6dec69e982d9acdfd99e127a4e727a8190eada8cf6fe"} Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841256 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"6675fd275b3b709df86eb862f930a5cbe8583493f7d28f73d0ce54a5c46c5210"} Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841268 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"6c98d136c354634c963119844260c04a05233c3c3316a8cd7376585947486efa"} Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841280 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"4a1ef25af0876593b406dee7af0a83fd206148b879a16b3a0c51e2a591f6c23f"} Dec 11 18:10:59 crc kubenswrapper[4877]: I1211 18:10:59.841290 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"04781655d79872cfa8c8121bec4cd4fc7c04eedf192d55fec0be3fdee0e03416"} Dec 
11 18:11:02 crc kubenswrapper[4877]: I1211 18:11:02.868058 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"ceb780fabb23c938d8b8e218b50f3c104ead5a33554ebdb9678923eab853fa90"} Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.883749 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" event={"ID":"679cfb1a-f6be-48b1-86f8-30db8bc9f50d","Type":"ContainerStarted","Data":"63acb5e04a9a3220b03a2fbb1e832606ae81c2a2d60510a9ef0db618b43c8c26"} Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.884287 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.884320 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.884329 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.918308 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.920340 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:04 crc kubenswrapper[4877]: I1211 18:11:04.926165 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" podStartSLOduration=6.926138806 podStartE2EDuration="6.926138806s" podCreationTimestamp="2025-12-11 18:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 
18:11:04.925354974 +0000 UTC m=+625.951599028" watchObservedRunningTime="2025-12-11 18:11:04.926138806 +0000 UTC m=+625.952382850" Dec 11 18:11:09 crc kubenswrapper[4877]: I1211 18:11:09.217877 4877 scope.go:117] "RemoveContainer" containerID="276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0" Dec 11 18:11:09 crc kubenswrapper[4877]: E1211 18:11:09.219003 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gwfnt_openshift-multus(61afe7d0-ec5b-41aa-a8fb-6628b863a59c)\"" pod="openshift-multus/multus-gwfnt" podUID="61afe7d0-ec5b-41aa-a8fb-6628b863a59c" Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.638326 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.638735 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.638783 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.639457 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9"} 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.639513 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9" gracePeriod=600 Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.973359 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9" exitCode=0 Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.973427 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9"} Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.973985 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3"} Dec 11 18:11:16 crc kubenswrapper[4877]: I1211 18:11:16.974041 4877 scope.go:117] "RemoveContainer" containerID="cf2325d1939acbcb7dd926a92a87748b56e9bd52fe77230dcd2e8429b4a7a42c" Dec 11 18:11:22 crc kubenswrapper[4877]: I1211 18:11:22.215573 4877 scope.go:117] "RemoveContainer" containerID="276da851410f19ec952a15ae96df11dd281e8aa6fd8e73b1987309da94e602f0" Dec 11 18:11:23 crc kubenswrapper[4877]: I1211 18:11:23.021938 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/2.log" Dec 11 18:11:23 crc kubenswrapper[4877]: I1211 18:11:23.023566 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/1.log" Dec 11 18:11:23 crc kubenswrapper[4877]: I1211 18:11:23.023663 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwfnt" event={"ID":"61afe7d0-ec5b-41aa-a8fb-6628b863a59c","Type":"ContainerStarted","Data":"c6d8271f049d7bdd71e2152ea11068d3ae3475f8ed8111a79f72f926fba47416"} Dec 11 18:11:28 crc kubenswrapper[4877]: I1211 18:11:28.580939 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-js84z" Dec 11 18:11:39 crc kubenswrapper[4877]: I1211 18:11:39.512843 4877 scope.go:117] "RemoveContainer" containerID="9a9c929cdad0d9629dbf6fc2c1fbd5a05440834612e6acd541b19af37d0378f6" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.018690 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5"] Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.020302 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.022567 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.041470 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5"] Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.054919 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.055087 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h24h\" (UniqueName: \"kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.055140 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: 
I1211 18:11:40.143104 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwfnt_61afe7d0-ec5b-41aa-a8fb-6628b863a59c/kube-multus/2.log" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.156336 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.156463 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.156509 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h24h\" (UniqueName: \"kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.156894 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.157226 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.177946 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h24h\" (UniqueName: \"kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.338222 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:40 crc kubenswrapper[4877]: I1211 18:11:40.742171 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5"] Dec 11 18:11:41 crc kubenswrapper[4877]: I1211 18:11:41.153683 4877 generic.go:334] "Generic (PLEG): container finished" podID="a3671635-4e9b-4c74-85a5-98480f49249a" containerID="2c63e957f819fbefd61e8836ff562de9dff2a0e057c7c7ed1f0cfb89507fd09c" exitCode=0 Dec 11 18:11:41 crc kubenswrapper[4877]: I1211 18:11:41.153788 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" event={"ID":"a3671635-4e9b-4c74-85a5-98480f49249a","Type":"ContainerDied","Data":"2c63e957f819fbefd61e8836ff562de9dff2a0e057c7c7ed1f0cfb89507fd09c"} Dec 11 18:11:41 crc kubenswrapper[4877]: I1211 18:11:41.154117 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" event={"ID":"a3671635-4e9b-4c74-85a5-98480f49249a","Type":"ContainerStarted","Data":"4632da35bfc573ecf0b8f985ec18a21e883bfb4f949bc90af2364e53509d16ce"} Dec 11 18:11:43 crc kubenswrapper[4877]: I1211 18:11:43.170237 4877 generic.go:334] "Generic (PLEG): container finished" podID="a3671635-4e9b-4c74-85a5-98480f49249a" containerID="c05a45ed05f5ff5b2e2fdafd6e3cf4e93b8c9d3c42f29e429750dd0f8b432f59" exitCode=0 Dec 11 18:11:43 crc kubenswrapper[4877]: I1211 18:11:43.170350 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" event={"ID":"a3671635-4e9b-4c74-85a5-98480f49249a","Type":"ContainerDied","Data":"c05a45ed05f5ff5b2e2fdafd6e3cf4e93b8c9d3c42f29e429750dd0f8b432f59"} Dec 11 18:11:44 crc kubenswrapper[4877]: I1211 18:11:44.184779 4877 
generic.go:334] "Generic (PLEG): container finished" podID="a3671635-4e9b-4c74-85a5-98480f49249a" containerID="9a4949a163d63cbf57a254228caaee5559c5142fb251fbeb145c20aa42a34c6e" exitCode=0 Dec 11 18:11:44 crc kubenswrapper[4877]: I1211 18:11:44.184837 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" event={"ID":"a3671635-4e9b-4c74-85a5-98480f49249a","Type":"ContainerDied","Data":"9a4949a163d63cbf57a254228caaee5559c5142fb251fbeb145c20aa42a34c6e"} Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.444997 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.537650 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util\") pod \"a3671635-4e9b-4c74-85a5-98480f49249a\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.537807 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h24h\" (UniqueName: \"kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h\") pod \"a3671635-4e9b-4c74-85a5-98480f49249a\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.537858 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle\") pod \"a3671635-4e9b-4c74-85a5-98480f49249a\" (UID: \"a3671635-4e9b-4c74-85a5-98480f49249a\") " Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.539071 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle" (OuterVolumeSpecName: "bundle") pod "a3671635-4e9b-4c74-85a5-98480f49249a" (UID: "a3671635-4e9b-4c74-85a5-98480f49249a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.546477 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h" (OuterVolumeSpecName: "kube-api-access-7h24h") pod "a3671635-4e9b-4c74-85a5-98480f49249a" (UID: "a3671635-4e9b-4c74-85a5-98480f49249a"). InnerVolumeSpecName "kube-api-access-7h24h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.565834 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util" (OuterVolumeSpecName: "util") pod "a3671635-4e9b-4c74-85a5-98480f49249a" (UID: "a3671635-4e9b-4c74-85a5-98480f49249a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.639031 4877 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-util\") on node \"crc\" DevicePath \"\"" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.639080 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h24h\" (UniqueName: \"kubernetes.io/projected/a3671635-4e9b-4c74-85a5-98480f49249a-kube-api-access-7h24h\") on node \"crc\" DevicePath \"\"" Dec 11 18:11:45 crc kubenswrapper[4877]: I1211 18:11:45.639100 4877 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3671635-4e9b-4c74-85a5-98480f49249a-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:11:46 crc kubenswrapper[4877]: I1211 18:11:46.203198 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" event={"ID":"a3671635-4e9b-4c74-85a5-98480f49249a","Type":"ContainerDied","Data":"4632da35bfc573ecf0b8f985ec18a21e883bfb4f949bc90af2364e53509d16ce"} Dec 11 18:11:46 crc kubenswrapper[4877]: I1211 18:11:46.203252 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4632da35bfc573ecf0b8f985ec18a21e883bfb4f949bc90af2364e53509d16ce" Dec 11 18:11:46 crc kubenswrapper[4877]: I1211 18:11:46.203305 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.625313 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-kkxhc"] Dec 11 18:11:51 crc kubenswrapper[4877]: E1211 18:11:51.626214 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="pull" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.626232 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="pull" Dec 11 18:11:51 crc kubenswrapper[4877]: E1211 18:11:51.626255 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="util" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.626262 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="util" Dec 11 18:11:51 crc kubenswrapper[4877]: E1211 18:11:51.626274 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="extract" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.626283 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="extract" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.626445 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3671635-4e9b-4c74-85a5-98480f49249a" containerName="extract" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.626967 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.629527 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.629707 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.629915 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gjn8x" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.646597 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-kkxhc"] Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.818833 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht6c\" (UniqueName: \"kubernetes.io/projected/e132a5e4-7ab3-4161-bc40-3d20fc57dab7-kube-api-access-2ht6c\") pod \"nmstate-operator-6769fb99d-kkxhc\" (UID: \"e132a5e4-7ab3-4161-bc40-3d20fc57dab7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.920610 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ht6c\" (UniqueName: \"kubernetes.io/projected/e132a5e4-7ab3-4161-bc40-3d20fc57dab7-kube-api-access-2ht6c\") pod \"nmstate-operator-6769fb99d-kkxhc\" (UID: \"e132a5e4-7ab3-4161-bc40-3d20fc57dab7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" Dec 11 18:11:51 crc kubenswrapper[4877]: I1211 18:11:51.950414 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ht6c\" (UniqueName: \"kubernetes.io/projected/e132a5e4-7ab3-4161-bc40-3d20fc57dab7-kube-api-access-2ht6c\") pod \"nmstate-operator-6769fb99d-kkxhc\" (UID: 
\"e132a5e4-7ab3-4161-bc40-3d20fc57dab7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" Dec 11 18:11:52 crc kubenswrapper[4877]: I1211 18:11:52.248351 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" Dec 11 18:11:52 crc kubenswrapper[4877]: I1211 18:11:52.478787 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-kkxhc"] Dec 11 18:11:53 crc kubenswrapper[4877]: I1211 18:11:53.254223 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" event={"ID":"e132a5e4-7ab3-4161-bc40-3d20fc57dab7","Type":"ContainerStarted","Data":"78e268dd4e30fab6fc1f3fc8f1afd3962d57c34a7587c96a5a78e6d446ca0093"} Dec 11 18:11:55 crc kubenswrapper[4877]: I1211 18:11:55.269446 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" event={"ID":"e132a5e4-7ab3-4161-bc40-3d20fc57dab7","Type":"ContainerStarted","Data":"6cbbba7455b5912f40aabb47cc77b636cae5d87f9ab20cca6f4f3909a53278be"} Dec 11 18:11:55 crc kubenswrapper[4877]: I1211 18:11:55.294595 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-kkxhc" podStartSLOduration=1.9138826839999998 podStartE2EDuration="4.294561646s" podCreationTimestamp="2025-12-11 18:11:51 +0000 UTC" firstStartedPulling="2025-12-11 18:11:52.485675554 +0000 UTC m=+673.511919598" lastFinishedPulling="2025-12-11 18:11:54.866354516 +0000 UTC m=+675.892598560" observedRunningTime="2025-12-11 18:11:55.287918123 +0000 UTC m=+676.314162207" watchObservedRunningTime="2025-12-11 18:11:55.294561646 +0000 UTC m=+676.320805790" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.371094 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 
18:12:00.372557 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.378943 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.379776 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5p7x6" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.380416 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.385223 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.397732 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.406566 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.406638 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-229td"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.407455 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.534237 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.535203 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.537874 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.539116 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.540472 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4mbhs" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544779 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt4ks\" (UniqueName: \"kubernetes.io/projected/a304c336-7461-4570-a515-4ae4c7d2cebd-kube-api-access-xt4ks\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544826 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-ovs-socket\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544857 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-dbus-socket\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544888 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8v5\" 
(UniqueName: \"kubernetes.io/projected/51a24aa4-4100-4dee-9b55-72a9c14f4859-kube-api-access-vq8v5\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: \"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544936 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjbc\" (UniqueName: \"kubernetes.io/projected/3caa2a2a-a894-44ae-8b4d-8bca5b08d582-kube-api-access-xxjbc\") pod \"nmstate-metrics-7f7f7578db-g7dnb\" (UID: \"3caa2a2a-a894-44ae-8b4d-8bca5b08d582\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544957 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/51a24aa4-4100-4dee-9b55-72a9c14f4859-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: \"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.544984 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-nmstate-lock\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.553042 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.645910 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt4ks\" (UniqueName: \"kubernetes.io/projected/a304c336-7461-4570-a515-4ae4c7d2cebd-kube-api-access-xt4ks\") pod \"nmstate-handler-229td\" (UID: 
\"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646238 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-ovs-socket\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646297 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-ovs-socket\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646446 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-dbus-socket\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646513 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8v5\" (UniqueName: \"kubernetes.io/projected/51a24aa4-4100-4dee-9b55-72a9c14f4859-kube-api-access-vq8v5\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: \"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646643 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjbc\" (UniqueName: \"kubernetes.io/projected/3caa2a2a-a894-44ae-8b4d-8bca5b08d582-kube-api-access-xxjbc\") pod \"nmstate-metrics-7f7f7578db-g7dnb\" (UID: \"3caa2a2a-a894-44ae-8b4d-8bca5b08d582\") " 
pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646676 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/51a24aa4-4100-4dee-9b55-72a9c14f4859-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: \"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646733 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1eb218c2-7bcc-411b-90e3-ab813f9739a4-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646771 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-nmstate-lock\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646805 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rpd\" (UniqueName: \"kubernetes.io/projected/1eb218c2-7bcc-411b-90e3-ab813f9739a4-kube-api-access-h8rpd\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646730 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-dbus-socket\") pod \"nmstate-handler-229td\" (UID: 
\"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646866 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a304c336-7461-4570-a515-4ae4c7d2cebd-nmstate-lock\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.646928 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb218c2-7bcc-411b-90e3-ab813f9739a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.654662 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/51a24aa4-4100-4dee-9b55-72a9c14f4859-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: \"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.668907 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt4ks\" (UniqueName: \"kubernetes.io/projected/a304c336-7461-4570-a515-4ae4c7d2cebd-kube-api-access-xt4ks\") pod \"nmstate-handler-229td\" (UID: \"a304c336-7461-4570-a515-4ae4c7d2cebd\") " pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.671159 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8v5\" (UniqueName: \"kubernetes.io/projected/51a24aa4-4100-4dee-9b55-72a9c14f4859-kube-api-access-vq8v5\") pod \"nmstate-webhook-f8fb84555-zxfhk\" (UID: 
\"51a24aa4-4100-4dee-9b55-72a9c14f4859\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.676563 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjbc\" (UniqueName: \"kubernetes.io/projected/3caa2a2a-a894-44ae-8b4d-8bca5b08d582-kube-api-access-xxjbc\") pod \"nmstate-metrics-7f7f7578db-g7dnb\" (UID: \"3caa2a2a-a894-44ae-8b4d-8bca5b08d582\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.738084 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.747493 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1eb218c2-7bcc-411b-90e3-ab813f9739a4-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.747554 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rpd\" (UniqueName: \"kubernetes.io/projected/1eb218c2-7bcc-411b-90e3-ab813f9739a4-kube-api-access-h8rpd\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.747586 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb218c2-7bcc-411b-90e3-ab813f9739a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc 
kubenswrapper[4877]: I1211 18:12:00.747857 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.748847 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1eb218c2-7bcc-411b-90e3-ab813f9739a4-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.751243 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1eb218c2-7bcc-411b-90e3-ab813f9739a4-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.762266 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.763457 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8576788c9d-bwtmn"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.765207 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.783173 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8576788c9d-bwtmn"] Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.783212 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rpd\" (UniqueName: \"kubernetes.io/projected/1eb218c2-7bcc-411b-90e3-ab813f9739a4-kube-api-access-h8rpd\") pod \"nmstate-console-plugin-6ff7998486-89bls\" (UID: \"1eb218c2-7bcc-411b-90e3-ab813f9739a4\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.855119 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949408 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-oauth-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949456 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-oauth-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949482 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnv24\" (UniqueName: \"kubernetes.io/projected/c3339969-eca9-4b70-9d3d-a662a34079db-kube-api-access-qnv24\") pod 
\"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949502 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-trusted-ca-bundle\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949530 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949565 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-console-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:00 crc kubenswrapper[4877]: I1211 18:12:00.949595 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-service-ca\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054692 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-oauth-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054770 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-oauth-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054799 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnv24\" (UniqueName: \"kubernetes.io/projected/c3339969-eca9-4b70-9d3d-a662a34079db-kube-api-access-qnv24\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054820 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-trusted-ca-bundle\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054857 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054900 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-console-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.054934 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-service-ca\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.056106 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-service-ca\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.056100 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-oauth-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.056928 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-console-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.057298 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3339969-eca9-4b70-9d3d-a662a34079db-trusted-ca-bundle\") pod 
\"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.070607 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-oauth-config\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.070846 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3339969-eca9-4b70-9d3d-a662a34079db-console-serving-cert\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.076949 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnv24\" (UniqueName: \"kubernetes.io/projected/c3339969-eca9-4b70-9d3d-a662a34079db-kube-api-access-qnv24\") pod \"console-8576788c9d-bwtmn\" (UID: \"c3339969-eca9-4b70-9d3d-a662a34079db\") " pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.076972 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk"] Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.109276 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:01 crc kubenswrapper[4877]: W1211 18:12:01.114292 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eb218c2_7bcc_411b_90e3_ab813f9739a4.slice/crio-2f43616aa9cd1cd84e4a4490f97a7e08b097ddc1b01544f775ad6dd3cce52e7d WatchSource:0}: Error finding container 2f43616aa9cd1cd84e4a4490f97a7e08b097ddc1b01544f775ad6dd3cce52e7d: Status 404 returned error can't find the container with id 2f43616aa9cd1cd84e4a4490f97a7e08b097ddc1b01544f775ad6dd3cce52e7d Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.114888 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls"] Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.193787 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb"] Dec 11 18:12:01 crc kubenswrapper[4877]: W1211 18:12:01.196270 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3caa2a2a_a894_44ae_8b4d_8bca5b08d582.slice/crio-9e515520d5f5ce973484779e66b2ce19d4eab141890338be55d19eaecc91ccc5 WatchSource:0}: Error finding container 9e515520d5f5ce973484779e66b2ce19d4eab141890338be55d19eaecc91ccc5: Status 404 returned error can't find the container with id 9e515520d5f5ce973484779e66b2ce19d4eab141890338be55d19eaecc91ccc5 Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.283763 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8576788c9d-bwtmn"] Dec 11 18:12:01 crc kubenswrapper[4877]: W1211 18:12:01.289327 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3339969_eca9_4b70_9d3d_a662a34079db.slice/crio-6202330a51632980398715d2d6fc500bd4c6363d84178990f2a02776a5990d72 
WatchSource:0}: Error finding container 6202330a51632980398715d2d6fc500bd4c6363d84178990f2a02776a5990d72: Status 404 returned error can't find the container with id 6202330a51632980398715d2d6fc500bd4c6363d84178990f2a02776a5990d72 Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.307574 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" event={"ID":"1eb218c2-7bcc-411b-90e3-ab813f9739a4","Type":"ContainerStarted","Data":"2f43616aa9cd1cd84e4a4490f97a7e08b097ddc1b01544f775ad6dd3cce52e7d"} Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.308647 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" event={"ID":"3caa2a2a-a894-44ae-8b4d-8bca5b08d582","Type":"ContainerStarted","Data":"9e515520d5f5ce973484779e66b2ce19d4eab141890338be55d19eaecc91ccc5"} Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.310147 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-229td" event={"ID":"a304c336-7461-4570-a515-4ae4c7d2cebd","Type":"ContainerStarted","Data":"676ff4f3c1293e6faafde5f60b2610eb1d31ea96700a6bdb7a95a4bd38722978"} Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.312765 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8576788c9d-bwtmn" event={"ID":"c3339969-eca9-4b70-9d3d-a662a34079db","Type":"ContainerStarted","Data":"6202330a51632980398715d2d6fc500bd4c6363d84178990f2a02776a5990d72"} Dec 11 18:12:01 crc kubenswrapper[4877]: I1211 18:12:01.313894 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" event={"ID":"51a24aa4-4100-4dee-9b55-72a9c14f4859","Type":"ContainerStarted","Data":"86b8ca5c3704fc2842b15c8c14e55699a5d399740b752094f87f5433dc302e30"} Dec 11 18:12:02 crc kubenswrapper[4877]: I1211 18:12:02.322140 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-8576788c9d-bwtmn" event={"ID":"c3339969-eca9-4b70-9d3d-a662a34079db","Type":"ContainerStarted","Data":"9e805087639228b682330fee5c3b1a6f83f34dece4b693c683c26a02c1bd84c0"} Dec 11 18:12:02 crc kubenswrapper[4877]: I1211 18:12:02.340984 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8576788c9d-bwtmn" podStartSLOduration=2.34096376 podStartE2EDuration="2.34096376s" podCreationTimestamp="2025-12-11 18:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:12:02.339892331 +0000 UTC m=+683.366136415" watchObservedRunningTime="2025-12-11 18:12:02.34096376 +0000 UTC m=+683.367207804" Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.339609 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" event={"ID":"51a24aa4-4100-4dee-9b55-72a9c14f4859","Type":"ContainerStarted","Data":"7003e4877361e1bc58abddae6c77749750dee5e1808fd636576280fe8777ad02"} Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.340179 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.342690 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" event={"ID":"3caa2a2a-a894-44ae-8b4d-8bca5b08d582","Type":"ContainerStarted","Data":"9e853ba1413d988f76150bec046c5097c61e1603928b8426ce8055d1b20ffb61"} Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.345634 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-229td" event={"ID":"a304c336-7461-4570-a515-4ae4c7d2cebd","Type":"ContainerStarted","Data":"1e90615777d8cd9534b05d2804f11ca2acd394193a1d74af2f5af3458895b504"} Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.345853 4877 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.367030 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" podStartSLOduration=1.991088795 podStartE2EDuration="4.362714507s" podCreationTimestamp="2025-12-11 18:12:00 +0000 UTC" firstStartedPulling="2025-12-11 18:12:01.077136654 +0000 UTC m=+682.103380698" lastFinishedPulling="2025-12-11 18:12:03.448762366 +0000 UTC m=+684.475006410" observedRunningTime="2025-12-11 18:12:04.36210814 +0000 UTC m=+685.388352204" watchObservedRunningTime="2025-12-11 18:12:04.362714507 +0000 UTC m=+685.388958561" Dec 11 18:12:04 crc kubenswrapper[4877]: I1211 18:12:04.387926 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-229td" podStartSLOduration=1.741183351 podStartE2EDuration="4.387899543s" podCreationTimestamp="2025-12-11 18:12:00 +0000 UTC" firstStartedPulling="2025-12-11 18:12:00.811345131 +0000 UTC m=+681.837589165" lastFinishedPulling="2025-12-11 18:12:03.458061273 +0000 UTC m=+684.484305357" observedRunningTime="2025-12-11 18:12:04.37984561 +0000 UTC m=+685.406089664" watchObservedRunningTime="2025-12-11 18:12:04.387899543 +0000 UTC m=+685.414143647" Dec 11 18:12:08 crc kubenswrapper[4877]: I1211 18:12:08.380271 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" event={"ID":"3caa2a2a-a894-44ae-8b4d-8bca5b08d582","Type":"ContainerStarted","Data":"1729a1ffde096eefd5372c040d7e045a307f1eceecd278ae53d2b0114a4a5fbe"} Dec 11 18:12:08 crc kubenswrapper[4877]: I1211 18:12:08.403830 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-g7dnb" podStartSLOduration=2.188460098 podStartE2EDuration="8.403809353s" podCreationTimestamp="2025-12-11 18:12:00 +0000 
UTC" firstStartedPulling="2025-12-11 18:12:01.198862447 +0000 UTC m=+682.225106491" lastFinishedPulling="2025-12-11 18:12:07.414211672 +0000 UTC m=+688.440455746" observedRunningTime="2025-12-11 18:12:08.402001353 +0000 UTC m=+689.428245417" watchObservedRunningTime="2025-12-11 18:12:08.403809353 +0000 UTC m=+689.430053407" Dec 11 18:12:10 crc kubenswrapper[4877]: I1211 18:12:10.800166 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-229td" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.109888 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.110201 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.115565 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.411957 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" event={"ID":"1eb218c2-7bcc-411b-90e3-ab813f9739a4","Type":"ContainerStarted","Data":"e31439b7c671e8d374fd885dfa5c75d01839eb427b455167e3446d7a488815b4"} Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.418077 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8576788c9d-bwtmn" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.436983 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-89bls" podStartSLOduration=1.602728345 podStartE2EDuration="11.436962571s" podCreationTimestamp="2025-12-11 18:12:00 +0000 UTC" firstStartedPulling="2025-12-11 18:12:01.118809286 +0000 UTC m=+682.145053330" 
lastFinishedPulling="2025-12-11 18:12:10.953043502 +0000 UTC m=+691.979287556" observedRunningTime="2025-12-11 18:12:11.431287015 +0000 UTC m=+692.457531069" watchObservedRunningTime="2025-12-11 18:12:11.436962571 +0000 UTC m=+692.463206615" Dec 11 18:12:11 crc kubenswrapper[4877]: I1211 18:12:11.522763 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"] Dec 11 18:12:20 crc kubenswrapper[4877]: I1211 18:12:20.755191 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-zxfhk" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.204399 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx"] Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.207232 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.211318 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.221887 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx"] Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.343333 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.343579 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6q9\" (UniqueName: \"kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.343641 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.445165 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.445290 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6q9\" (UniqueName: \"kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.445335 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.445927 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.445962 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.469264 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6q9\" (UniqueName: \"kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.539074 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:35 crc kubenswrapper[4877]: I1211 18:12:35.778428 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx"] Dec 11 18:12:35 crc kubenswrapper[4877]: W1211 18:12:35.785887 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd0b28b4_1e6e_45dc_8ed7_74c641bccaf9.slice/crio-85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf WatchSource:0}: Error finding container 85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf: Status 404 returned error can't find the container with id 85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf Dec 11 18:12:36 crc kubenswrapper[4877]: I1211 18:12:36.567981 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7bj9b" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" containerName="console" containerID="cri-o://82ea4e140a60382319cfb269dab3199169d6ec642a5fc6c7ad7a718f2464e11f" gracePeriod=15 Dec 11 18:12:36 crc kubenswrapper[4877]: I1211 18:12:36.578935 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" event={"ID":"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9","Type":"ContainerStarted","Data":"85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf"} Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.592690 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bj9b_88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7/console/0.log" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.592756 4877 generic.go:334] "Generic (PLEG): container finished" podID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" 
containerID="82ea4e140a60382319cfb269dab3199169d6ec642a5fc6c7ad7a718f2464e11f" exitCode=2 Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.592877 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bj9b" event={"ID":"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7","Type":"ContainerDied","Data":"82ea4e140a60382319cfb269dab3199169d6ec642a5fc6c7ad7a718f2464e11f"} Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.596659 4877 generic.go:334] "Generic (PLEG): container finished" podID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerID="6d07609f508153db7a9ea7d85f0cb889952929c47755e86ac2ec204bda91c15b" exitCode=0 Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.596719 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" event={"ID":"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9","Type":"ContainerDied","Data":"6d07609f508153db7a9ea7d85f0cb889952929c47755e86ac2ec204bda91c15b"} Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.707713 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bj9b_88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7/console/0.log" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.708148 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.882896 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zcgf\" (UniqueName: \"kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883091 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883138 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883225 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883299 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883332 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.883422 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config\") pod \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\" (UID: \"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7\") " Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.884301 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.884476 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.884505 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config" (OuterVolumeSpecName: "console-config") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.885276 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca" (OuterVolumeSpecName: "service-ca") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.891080 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.892020 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.892303 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf" (OuterVolumeSpecName: "kube-api-access-9zcgf") pod "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" (UID: "88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7"). InnerVolumeSpecName "kube-api-access-9zcgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985559 4877 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985613 4877 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985624 4877 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985634 4877 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985644 4877 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985653 4877 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:37 crc kubenswrapper[4877]: I1211 18:12:37.985664 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zcgf\" (UniqueName: \"kubernetes.io/projected/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7-kube-api-access-9zcgf\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:38 crc 
kubenswrapper[4877]: I1211 18:12:38.603193 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7bj9b_88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7/console/0.log" Dec 11 18:12:38 crc kubenswrapper[4877]: I1211 18:12:38.603260 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7bj9b" event={"ID":"88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7","Type":"ContainerDied","Data":"846e66d9d8afdfc1057bce57efd506bf9128b5aada3f713a0369588c7320467a"} Dec 11 18:12:38 crc kubenswrapper[4877]: I1211 18:12:38.603298 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7bj9b" Dec 11 18:12:38 crc kubenswrapper[4877]: I1211 18:12:38.603311 4877 scope.go:117] "RemoveContainer" containerID="82ea4e140a60382319cfb269dab3199169d6ec642a5fc6c7ad7a718f2464e11f" Dec 11 18:12:38 crc kubenswrapper[4877]: I1211 18:12:38.638356 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"] Dec 11 18:12:38 crc kubenswrapper[4877]: I1211 18:12:38.643521 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7bj9b"] Dec 11 18:12:38 crc kubenswrapper[4877]: E1211 18:12:38.711947 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88e4b0ab_f4ce_4b60_9d09_5eab4aa3e4f7.slice/crio-846e66d9d8afdfc1057bce57efd506bf9128b5aada3f713a0369588c7320467a\": RecentStats: unable to find data in memory cache]" Dec 11 18:12:39 crc kubenswrapper[4877]: I1211 18:12:39.223875 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" path="/var/lib/kubelet/pods/88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7/volumes" Dec 11 18:12:39 crc kubenswrapper[4877]: I1211 18:12:39.613001 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerID="784e5195df931074401a12a2d1c3102bceac59d7a92f1efeffcde743c382d0e2" exitCode=0 Dec 11 18:12:39 crc kubenswrapper[4877]: I1211 18:12:39.613060 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" event={"ID":"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9","Type":"ContainerDied","Data":"784e5195df931074401a12a2d1c3102bceac59d7a92f1efeffcde743c382d0e2"} Dec 11 18:12:40 crc kubenswrapper[4877]: I1211 18:12:40.627834 4877 generic.go:334] "Generic (PLEG): container finished" podID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerID="21a718cb111fd46d8a3c7ea65f45893996e4089a955a71dfdd6cdfc736201a09" exitCode=0 Dec 11 18:12:40 crc kubenswrapper[4877]: I1211 18:12:40.627908 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" event={"ID":"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9","Type":"ContainerDied","Data":"21a718cb111fd46d8a3c7ea65f45893996e4089a955a71dfdd6cdfc736201a09"} Dec 11 18:12:41 crc kubenswrapper[4877]: I1211 18:12:41.910884 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.043859 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle\") pod \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.043961 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr6q9\" (UniqueName: \"kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9\") pod \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.044018 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util\") pod \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\" (UID: \"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9\") " Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.045647 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle" (OuterVolumeSpecName: "bundle") pod "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" (UID: "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.054707 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9" (OuterVolumeSpecName: "kube-api-access-zr6q9") pod "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" (UID: "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9"). InnerVolumeSpecName "kube-api-access-zr6q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.079047 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util" (OuterVolumeSpecName: "util") pod "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" (UID: "cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.145775 4877 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.145853 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr6q9\" (UniqueName: \"kubernetes.io/projected/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-kube-api-access-zr6q9\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.145873 4877 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9-util\") on node \"crc\" DevicePath \"\"" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.646957 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" event={"ID":"cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9","Type":"ContainerDied","Data":"85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf"} Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.647022 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85812f74ef080abb9896128aafcb0b3e9ba97c55c4934490ac218373a52551cf" Dec 11 18:12:42 crc kubenswrapper[4877]: I1211 18:12:42.647082 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.283753 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c"] Dec 11 18:12:54 crc kubenswrapper[4877]: E1211 18:12:54.284703 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" containerName="console" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284724 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" containerName="console" Dec 11 18:12:54 crc kubenswrapper[4877]: E1211 18:12:54.284745 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="util" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284755 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="util" Dec 11 18:12:54 crc kubenswrapper[4877]: E1211 18:12:54.284766 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="pull" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284775 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="pull" Dec 11 18:12:54 crc kubenswrapper[4877]: E1211 18:12:54.284793 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="extract" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284801 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="extract" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284975 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e4b0ab-f4ce-4b60-9d09-5eab4aa3e4f7" 
containerName="console" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.284993 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9" containerName="extract" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.285632 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.288594 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.288600 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.288658 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.301583 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-97xkh" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.302284 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.329889 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c"] Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.331711 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4n9\" (UniqueName: \"kubernetes.io/projected/ab8873f5-e97e-483c-a6f4-dad1a15fb382-kube-api-access-gr4n9\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " 
pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.331780 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-apiservice-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.331836 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-webhook-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.432930 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4n9\" (UniqueName: \"kubernetes.io/projected/ab8873f5-e97e-483c-a6f4-dad1a15fb382-kube-api-access-gr4n9\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.432990 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-apiservice-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.433037 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-webhook-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.441617 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-apiservice-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.442122 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab8873f5-e97e-483c-a6f4-dad1a15fb382-webhook-cert\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.451486 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4n9\" (UniqueName: \"kubernetes.io/projected/ab8873f5-e97e-483c-a6f4-dad1a15fb382-kube-api-access-gr4n9\") pod \"metallb-operator-controller-manager-554f49ddd5-7c57c\" (UID: \"ab8873f5-e97e-483c-a6f4-dad1a15fb382\") " pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.534923 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp"] Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.535829 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.538693 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xvtxg" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.539873 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.540096 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.549193 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp"] Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.631897 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.737124 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-apiservice-cert\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.737186 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jg7v\" (UniqueName: \"kubernetes.io/projected/36cfd319-6f46-4547-bc92-6d8f108f556b-kube-api-access-7jg7v\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 
18:12:54.737209 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-webhook-cert\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.840270 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jg7v\" (UniqueName: \"kubernetes.io/projected/36cfd319-6f46-4547-bc92-6d8f108f556b-kube-api-access-7jg7v\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.840841 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-webhook-cert\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.840952 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-apiservice-cert\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.851555 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-apiservice-cert\") pod 
\"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.851939 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36cfd319-6f46-4547-bc92-6d8f108f556b-webhook-cert\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:54 crc kubenswrapper[4877]: I1211 18:12:54.867410 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jg7v\" (UniqueName: \"kubernetes.io/projected/36cfd319-6f46-4547-bc92-6d8f108f556b-kube-api-access-7jg7v\") pod \"metallb-operator-webhook-server-bf4cbf554-xwtvp\" (UID: \"36cfd319-6f46-4547-bc92-6d8f108f556b\") " pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:55 crc kubenswrapper[4877]: I1211 18:12:55.154142 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:12:55 crc kubenswrapper[4877]: I1211 18:12:55.182846 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c"] Dec 11 18:12:55 crc kubenswrapper[4877]: W1211 18:12:55.192202 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab8873f5_e97e_483c_a6f4_dad1a15fb382.slice/crio-e8789223c6588b93faeefe3d60244ee68501ed9115c7e1dadc6629fa9cf8a098 WatchSource:0}: Error finding container e8789223c6588b93faeefe3d60244ee68501ed9115c7e1dadc6629fa9cf8a098: Status 404 returned error can't find the container with id e8789223c6588b93faeefe3d60244ee68501ed9115c7e1dadc6629fa9cf8a098 Dec 11 18:12:55 crc kubenswrapper[4877]: I1211 18:12:55.632615 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp"] Dec 11 18:12:55 crc kubenswrapper[4877]: W1211 18:12:55.654676 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfd319_6f46_4547_bc92_6d8f108f556b.slice/crio-2f0061551fefad3d03ff8329b59ed4515538560578e0da22bf339a929c3f1622 WatchSource:0}: Error finding container 2f0061551fefad3d03ff8329b59ed4515538560578e0da22bf339a929c3f1622: Status 404 returned error can't find the container with id 2f0061551fefad3d03ff8329b59ed4515538560578e0da22bf339a929c3f1622 Dec 11 18:12:55 crc kubenswrapper[4877]: I1211 18:12:55.728123 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" event={"ID":"ab8873f5-e97e-483c-a6f4-dad1a15fb382","Type":"ContainerStarted","Data":"e8789223c6588b93faeefe3d60244ee68501ed9115c7e1dadc6629fa9cf8a098"} Dec 11 18:12:55 crc kubenswrapper[4877]: I1211 18:12:55.729365 4877 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" event={"ID":"36cfd319-6f46-4547-bc92-6d8f108f556b","Type":"ContainerStarted","Data":"2f0061551fefad3d03ff8329b59ed4515538560578e0da22bf339a929c3f1622"} Dec 11 18:13:03 crc kubenswrapper[4877]: I1211 18:13:03.818437 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" event={"ID":"36cfd319-6f46-4547-bc92-6d8f108f556b","Type":"ContainerStarted","Data":"19e6c26cd8a062c6af09f0ea433348e0f24ae879a64ef99365187cff2260ba82"} Dec 11 18:13:03 crc kubenswrapper[4877]: I1211 18:13:03.819451 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:13:03 crc kubenswrapper[4877]: I1211 18:13:03.821652 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" event={"ID":"ab8873f5-e97e-483c-a6f4-dad1a15fb382","Type":"ContainerStarted","Data":"42f579a51c6c910d28e8e395e01bf633b4091b25dc8a86bd420da9362044cc12"} Dec 11 18:13:03 crc kubenswrapper[4877]: I1211 18:13:03.821894 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 18:13:03 crc kubenswrapper[4877]: I1211 18:13:03.848770 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" podStartSLOduration=2.599772292 podStartE2EDuration="9.848750048s" podCreationTimestamp="2025-12-11 18:12:54 +0000 UTC" firstStartedPulling="2025-12-11 18:12:55.65713786 +0000 UTC m=+736.683381904" lastFinishedPulling="2025-12-11 18:13:02.906115606 +0000 UTC m=+743.932359660" observedRunningTime="2025-12-11 18:13:03.841484917 +0000 UTC m=+744.867728971" watchObservedRunningTime="2025-12-11 18:13:03.848750048 +0000 UTC m=+744.874994092" Dec 11 18:13:13 crc 
kubenswrapper[4877]: I1211 18:13:13.177435 4877 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 18:13:15 crc kubenswrapper[4877]: I1211 18:13:15.161326 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-bf4cbf554-xwtvp" Dec 11 18:13:15 crc kubenswrapper[4877]: I1211 18:13:15.185754 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" podStartSLOduration=13.495517593 podStartE2EDuration="21.185725788s" podCreationTimestamp="2025-12-11 18:12:54 +0000 UTC" firstStartedPulling="2025-12-11 18:12:55.195498926 +0000 UTC m=+736.221742970" lastFinishedPulling="2025-12-11 18:13:02.885707121 +0000 UTC m=+743.911951165" observedRunningTime="2025-12-11 18:13:03.870296625 +0000 UTC m=+744.896540699" watchObservedRunningTime="2025-12-11 18:13:15.185725788 +0000 UTC m=+756.211969862" Dec 11 18:13:16 crc kubenswrapper[4877]: I1211 18:13:16.637947 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:13:16 crc kubenswrapper[4877]: I1211 18:13:16.638882 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:13:34 crc kubenswrapper[4877]: I1211 18:13:34.635563 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-554f49ddd5-7c57c" Dec 11 
18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.360064 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2sg4m"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.363324 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.368290 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.370524 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.370786 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.370819 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2qvkg" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.372005 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.375134 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.379805 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470167 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dad0c6-8977-4cff-9866-e95d84ccc658-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470300 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwdz\" (UniqueName: \"kubernetes.io/projected/d3dad0c6-8977-4cff-9866-e95d84ccc658-kube-api-access-pdwdz\") pod \"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470358 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics-certs\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470414 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-startup\") pod \"frr-k8s-2sg4m\" (UID: 
\"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470453 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrzq\" (UniqueName: \"kubernetes.io/projected/e75b60e5-f320-42d0-8ebc-4aa90962ced4-kube-api-access-fxrzq\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470481 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-sockets\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470516 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470553 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-conf\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.470608 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-reloader\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc 
kubenswrapper[4877]: I1211 18:13:35.471937 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9hdjf"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.473048 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.475872 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lnrcz" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.478667 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.478667 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.479097 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.493833 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-jtbmt"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.495096 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.497018 4877 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.527938 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-jtbmt"] Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572102 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics-certs\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572167 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572200 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-metrics-certs\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572227 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-startup\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572256 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrzq\" (UniqueName: \"kubernetes.io/projected/e75b60e5-f320-42d0-8ebc-4aa90962ced4-kube-api-access-fxrzq\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572278 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/661affbe-08b4-406d-b4e4-78cbefa4de67-metallb-excludel2\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572298 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-sockets\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572323 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s769v\" (UniqueName: \"kubernetes.io/projected/786d79d5-52c0-410a-b4e4-b3df71e617ba-kube-api-access-s769v\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572344 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572366 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-cert\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572408 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-conf\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572444 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572469 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-reloader\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572489 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dad0c6-8977-4cff-9866-e95d84ccc658-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572536 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwdz\" (UniqueName: \"kubernetes.io/projected/d3dad0c6-8977-4cff-9866-e95d84ccc658-kube-api-access-pdwdz\") pod 
\"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.572558 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mfp\" (UniqueName: \"kubernetes.io/projected/661affbe-08b4-406d-b4e4-78cbefa4de67-kube-api-access-c7mfp\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.574574 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.574573 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-reloader\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.574789 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-conf\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.575638 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-sockets\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.575669 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e75b60e5-f320-42d0-8ebc-4aa90962ced4-frr-startup\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.581625 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e75b60e5-f320-42d0-8ebc-4aa90962ced4-metrics-certs\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.589455 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dad0c6-8977-4cff-9866-e95d84ccc658-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.592432 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrzq\" (UniqueName: \"kubernetes.io/projected/e75b60e5-f320-42d0-8ebc-4aa90962ced4-kube-api-access-fxrzq\") pod \"frr-k8s-2sg4m\" (UID: \"e75b60e5-f320-42d0-8ebc-4aa90962ced4\") " pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.593994 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwdz\" (UniqueName: \"kubernetes.io/projected/d3dad0c6-8977-4cff-9866-e95d84ccc658-kube-api-access-pdwdz\") pod \"frr-k8s-webhook-server-7784b6fcf-v5sxh\" (UID: \"d3dad0c6-8977-4cff-9866-e95d84ccc658\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673613 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mfp\" (UniqueName: 
\"kubernetes.io/projected/661affbe-08b4-406d-b4e4-78cbefa4de67-kube-api-access-c7mfp\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673703 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673736 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-metrics-certs\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673772 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/661affbe-08b4-406d-b4e4-78cbefa4de67-metallb-excludel2\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673797 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s769v\" (UniqueName: \"kubernetes.io/projected/786d79d5-52c0-410a-b4e4-b3df71e617ba-kube-api-access-s769v\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673825 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-cert\") pod \"controller-5bddd4b946-jtbmt\" (UID: 
\"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.673860 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: E1211 18:13:35.674027 4877 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 11 18:13:35 crc kubenswrapper[4877]: E1211 18:13:35.674103 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs podName:661affbe-08b4-406d-b4e4-78cbefa4de67 nodeName:}" failed. No retries permitted until 2025-12-11 18:13:36.174078564 +0000 UTC m=+777.200322608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs") pod "speaker-9hdjf" (UID: "661affbe-08b4-406d-b4e4-78cbefa4de67") : secret "speaker-certs-secret" not found Dec 11 18:13:35 crc kubenswrapper[4877]: E1211 18:13:35.674243 4877 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 18:13:35 crc kubenswrapper[4877]: E1211 18:13:35.674345 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist podName:661affbe-08b4-406d-b4e4-78cbefa4de67 nodeName:}" failed. No retries permitted until 2025-12-11 18:13:36.174319401 +0000 UTC m=+777.200563535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist") pod "speaker-9hdjf" (UID: "661affbe-08b4-406d-b4e4-78cbefa4de67") : secret "metallb-memberlist" not found Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.675188 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/661affbe-08b4-406d-b4e4-78cbefa4de67-metallb-excludel2\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.677830 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-metrics-certs\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.677839 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/786d79d5-52c0-410a-b4e4-b3df71e617ba-cert\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.687986 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.694212 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7mfp\" (UniqueName: \"kubernetes.io/projected/661affbe-08b4-406d-b4e4-78cbefa4de67-kube-api-access-c7mfp\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.694328 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s769v\" (UniqueName: \"kubernetes.io/projected/786d79d5-52c0-410a-b4e4-b3df71e617ba-kube-api-access-s769v\") pod \"controller-5bddd4b946-jtbmt\" (UID: \"786d79d5-52c0-410a-b4e4-b3df71e617ba\") " pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.702264 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:35 crc kubenswrapper[4877]: I1211 18:13:35.834036 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:36 crc kubenswrapper[4877]: I1211 18:13:36.109473 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-jtbmt"] Dec 11 18:13:36 crc kubenswrapper[4877]: W1211 18:13:36.112338 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod786d79d5_52c0_410a_b4e4_b3df71e617ba.slice/crio-88b0f5182ca3799bfa3ea747b61f0352c446875df6d821e84df16b4556dc6535 WatchSource:0}: Error finding container 88b0f5182ca3799bfa3ea747b61f0352c446875df6d821e84df16b4556dc6535: Status 404 returned error can't find the container with id 88b0f5182ca3799bfa3ea747b61f0352c446875df6d821e84df16b4556dc6535 Dec 11 18:13:36 crc kubenswrapper[4877]: I1211 18:13:36.180485 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:36 crc kubenswrapper[4877]: I1211 18:13:36.180928 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:36 crc kubenswrapper[4877]: E1211 18:13:36.181076 4877 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 18:13:36 crc kubenswrapper[4877]: E1211 18:13:36.181141 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist podName:661affbe-08b4-406d-b4e4-78cbefa4de67 nodeName:}" failed. 
No retries permitted until 2025-12-11 18:13:37.181122204 +0000 UTC m=+778.207366248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist") pod "speaker-9hdjf" (UID: "661affbe-08b4-406d-b4e4-78cbefa4de67") : secret "metallb-memberlist" not found Dec 11 18:13:36 crc kubenswrapper[4877]: I1211 18:13:36.186531 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-metrics-certs\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:36 crc kubenswrapper[4877]: I1211 18:13:36.220239 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh"] Dec 11 18:13:36 crc kubenswrapper[4877]: W1211 18:13:36.221964 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3dad0c6_8977_4cff_9866_e95d84ccc658.slice/crio-4a00d8fb717dc1904c4b63cd8c519e906e9199cfd83535b0f567faf71baab1d7 WatchSource:0}: Error finding container 4a00d8fb717dc1904c4b63cd8c519e906e9199cfd83535b0f567faf71baab1d7: Status 404 returned error can't find the container with id 4a00d8fb717dc1904c4b63cd8c519e906e9199cfd83535b0f567faf71baab1d7 Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.050203 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" event={"ID":"d3dad0c6-8977-4cff-9866-e95d84ccc658","Type":"ContainerStarted","Data":"4a00d8fb717dc1904c4b63cd8c519e906e9199cfd83535b0f567faf71baab1d7"} Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.053440 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jtbmt" 
event={"ID":"786d79d5-52c0-410a-b4e4-b3df71e617ba","Type":"ContainerStarted","Data":"817ce1b1594fd69a1c914ade6ba1cec50420a85292be42eba448286e84814594"} Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.053512 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jtbmt" event={"ID":"786d79d5-52c0-410a-b4e4-b3df71e617ba","Type":"ContainerStarted","Data":"701a25779d5387012b37a9e68aa617764ff0e719c472ae44523ad69caacfdce2"} Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.053531 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-jtbmt" event={"ID":"786d79d5-52c0-410a-b4e4-b3df71e617ba","Type":"ContainerStarted","Data":"88b0f5182ca3799bfa3ea747b61f0352c446875df6d821e84df16b4556dc6535"} Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.053635 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.055461 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"99d7c3bbd49fc05a77e086d2e17602ec2204f4558638eb07c8c013a01811f815"} Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.074093 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-jtbmt" podStartSLOduration=2.074071411 podStartE2EDuration="2.074071411s" podCreationTimestamp="2025-12-11 18:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:13:37.071882651 +0000 UTC m=+778.098126705" watchObservedRunningTime="2025-12-11 18:13:37.074071411 +0000 UTC m=+778.100315455" Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.197345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.204983 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/661affbe-08b4-406d-b4e4-78cbefa4de67-memberlist\") pod \"speaker-9hdjf\" (UID: \"661affbe-08b4-406d-b4e4-78cbefa4de67\") " pod="metallb-system/speaker-9hdjf" Dec 11 18:13:37 crc kubenswrapper[4877]: I1211 18:13:37.289301 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9hdjf" Dec 11 18:13:37 crc kubenswrapper[4877]: W1211 18:13:37.323204 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod661affbe_08b4_406d_b4e4_78cbefa4de67.slice/crio-f07cc343d280108ef26ea1fb650d5c55f2b17abf642f6e3c0b2461310c0b9010 WatchSource:0}: Error finding container f07cc343d280108ef26ea1fb650d5c55f2b17abf642f6e3c0b2461310c0b9010: Status 404 returned error can't find the container with id f07cc343d280108ef26ea1fb650d5c55f2b17abf642f6e3c0b2461310c0b9010 Dec 11 18:13:38 crc kubenswrapper[4877]: I1211 18:13:38.073060 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9hdjf" event={"ID":"661affbe-08b4-406d-b4e4-78cbefa4de67","Type":"ContainerStarted","Data":"2c95151129648215fe00a6e6240de24049e63f95882fd1e67729be260487722d"} Dec 11 18:13:38 crc kubenswrapper[4877]: I1211 18:13:38.073576 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9hdjf" event={"ID":"661affbe-08b4-406d-b4e4-78cbefa4de67","Type":"ContainerStarted","Data":"27d0ff98e4e8e6355f7754692f80aca2fabfb39d80edb41a9a1f6024e481d319"} Dec 11 18:13:38 crc kubenswrapper[4877]: I1211 18:13:38.073603 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-9hdjf" event={"ID":"661affbe-08b4-406d-b4e4-78cbefa4de67","Type":"ContainerStarted","Data":"f07cc343d280108ef26ea1fb650d5c55f2b17abf642f6e3c0b2461310c0b9010"} Dec 11 18:13:38 crc kubenswrapper[4877]: I1211 18:13:38.074208 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9hdjf" Dec 11 18:13:38 crc kubenswrapper[4877]: I1211 18:13:38.108275 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9hdjf" podStartSLOduration=3.108255516 podStartE2EDuration="3.108255516s" podCreationTimestamp="2025-12-11 18:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:13:38.101277164 +0000 UTC m=+779.127521208" watchObservedRunningTime="2025-12-11 18:13:38.108255516 +0000 UTC m=+779.134499560" Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.141564 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" event={"ID":"d3dad0c6-8977-4cff-9866-e95d84ccc658","Type":"ContainerStarted","Data":"4337780bf120a4cfcec46401449206a8853bb67be0d1a7ab9bb62bed2aae071c"} Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.142338 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.143335 4877 generic.go:334] "Generic (PLEG): container finished" podID="e75b60e5-f320-42d0-8ebc-4aa90962ced4" containerID="ec960f87252239f525c14db7ed7149daad7eb529ea9d790d192c56ae8ee4dc6f" exitCode=0 Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.143407 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerDied","Data":"ec960f87252239f525c14db7ed7149daad7eb529ea9d790d192c56ae8ee4dc6f"} Dec 11 
18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.167236 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" podStartSLOduration=2.358096579 podStartE2EDuration="11.167205954s" podCreationTimestamp="2025-12-11 18:13:35 +0000 UTC" firstStartedPulling="2025-12-11 18:13:36.225614865 +0000 UTC m=+777.251858909" lastFinishedPulling="2025-12-11 18:13:45.03472424 +0000 UTC m=+786.060968284" observedRunningTime="2025-12-11 18:13:46.161103678 +0000 UTC m=+787.187347712" watchObservedRunningTime="2025-12-11 18:13:46.167205954 +0000 UTC m=+787.193450018" Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.638174 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:13:46 crc kubenswrapper[4877]: I1211 18:13:46.638733 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:13:47 crc kubenswrapper[4877]: I1211 18:13:47.167831 4877 generic.go:334] "Generic (PLEG): container finished" podID="e75b60e5-f320-42d0-8ebc-4aa90962ced4" containerID="90593e51f9bfb7fb467ed64f5fe55aaf97ae4447c84c0c7eb184e79d5627f271" exitCode=0 Dec 11 18:13:47 crc kubenswrapper[4877]: I1211 18:13:47.167969 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerDied","Data":"90593e51f9bfb7fb467ed64f5fe55aaf97ae4447c84c0c7eb184e79d5627f271"} Dec 11 18:13:47 crc kubenswrapper[4877]: I1211 18:13:47.294842 
4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9hdjf" Dec 11 18:13:48 crc kubenswrapper[4877]: I1211 18:13:48.176447 4877 generic.go:334] "Generic (PLEG): container finished" podID="e75b60e5-f320-42d0-8ebc-4aa90962ced4" containerID="846bff8f78f4a746a55e4d5b03d7f68ddfc1d3cd5c6963405a3bc54a68c3cabb" exitCode=0 Dec 11 18:13:48 crc kubenswrapper[4877]: I1211 18:13:48.176545 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerDied","Data":"846bff8f78f4a746a55e4d5b03d7f68ddfc1d3cd5c6963405a3bc54a68c3cabb"} Dec 11 18:13:49 crc kubenswrapper[4877]: I1211 18:13:49.188189 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"a400fae2865542bc7b3e6e9822bbfcc0de50975b22904b549b3fd48edb825704"} Dec 11 18:13:49 crc kubenswrapper[4877]: I1211 18:13:49.188728 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"7ec1fb00df06c75a2aaa4e0464a78e4e42fdd7247b4300ccc1bb6a4f1dff3fbf"} Dec 11 18:13:49 crc kubenswrapper[4877]: I1211 18:13:49.188740 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"932edb5ebf5425bd296039cd578f6ddcad606b9a27eee2ac820676ae48284bf6"} Dec 11 18:13:49 crc kubenswrapper[4877]: I1211 18:13:49.188749 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"af733d50b4de4b8e4526db30c947c9e59b487eb736ac7268c747f66545cd09a3"} Dec 11 18:13:49 crc kubenswrapper[4877]: I1211 18:13:49.188757 4877 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"f264e94b86c6c8fce9866639c909d17bcad0f9c75d855af3b873727d316d8190"} Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.202596 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2sg4m" event={"ID":"e75b60e5-f320-42d0-8ebc-4aa90962ced4","Type":"ContainerStarted","Data":"e56946c7cd19d87d26aff9d72e20fbe7e8a21ee9f56cea73d52c5c606025fd40"} Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.202822 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.223015 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2sg4m" podStartSLOduration=6.642630981 podStartE2EDuration="15.222990545s" podCreationTimestamp="2025-12-11 18:13:35 +0000 UTC" firstStartedPulling="2025-12-11 18:13:36.433753574 +0000 UTC m=+777.459997618" lastFinishedPulling="2025-12-11 18:13:45.014113138 +0000 UTC m=+786.040357182" observedRunningTime="2025-12-11 18:13:50.221495 +0000 UTC m=+791.247739094" watchObservedRunningTime="2025-12-11 18:13:50.222990545 +0000 UTC m=+791.249234629" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.506685 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.507518 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.512004 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.512263 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.522068 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5dhwf" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.530555 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.624247 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsp6\" (UniqueName: \"kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6\") pod \"openstack-operator-index-gkv9p\" (UID: \"b086fb68-d6f7-4e4b-9729-5de2a03d3d30\") " pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.688596 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.725425 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsp6\" (UniqueName: \"kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6\") pod \"openstack-operator-index-gkv9p\" (UID: \"b086fb68-d6f7-4e4b-9729-5de2a03d3d30\") " pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.729344 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.757586 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsp6\" (UniqueName: \"kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6\") pod \"openstack-operator-index-gkv9p\" (UID: \"b086fb68-d6f7-4e4b-9729-5de2a03d3d30\") " pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:50 crc kubenswrapper[4877]: I1211 18:13:50.828272 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:51 crc kubenswrapper[4877]: I1211 18:13:51.169720 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:51 crc kubenswrapper[4877]: I1211 18:13:51.213460 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkv9p" event={"ID":"b086fb68-d6f7-4e4b-9729-5de2a03d3d30","Type":"ContainerStarted","Data":"1c2c94ec5256ead75c64c0b7b78f3c0680443b1cdd66bc0ddfa0c14fa6a62236"} Dec 11 18:13:53 crc kubenswrapper[4877]: I1211 18:13:53.881081 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.487178 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nmcl7"] Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.488335 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.491813 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmcl7"] Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.586438 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dcq\" (UniqueName: \"kubernetes.io/projected/05174a0d-198b-4dbc-847c-164453075d91-kube-api-access-92dcq\") pod \"openstack-operator-index-nmcl7\" (UID: \"05174a0d-198b-4dbc-847c-164453075d91\") " pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.688428 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dcq\" (UniqueName: \"kubernetes.io/projected/05174a0d-198b-4dbc-847c-164453075d91-kube-api-access-92dcq\") pod \"openstack-operator-index-nmcl7\" (UID: \"05174a0d-198b-4dbc-847c-164453075d91\") " pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.725305 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dcq\" (UniqueName: \"kubernetes.io/projected/05174a0d-198b-4dbc-847c-164453075d91-kube-api-access-92dcq\") pod \"openstack-operator-index-nmcl7\" (UID: \"05174a0d-198b-4dbc-847c-164453075d91\") " pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:13:54 crc kubenswrapper[4877]: I1211 18:13:54.814076 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.247130 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkv9p" event={"ID":"b086fb68-d6f7-4e4b-9729-5de2a03d3d30","Type":"ContainerStarted","Data":"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e"} Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.247402 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gkv9p" podUID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" containerName="registry-server" containerID="cri-o://35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e" gracePeriod=2 Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.272782 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gkv9p" podStartSLOduration=1.477532402 podStartE2EDuration="5.272762086s" podCreationTimestamp="2025-12-11 18:13:50 +0000 UTC" firstStartedPulling="2025-12-11 18:13:51.175963346 +0000 UTC m=+792.202207390" lastFinishedPulling="2025-12-11 18:13:54.97119302 +0000 UTC m=+795.997437074" observedRunningTime="2025-12-11 18:13:55.270768279 +0000 UTC m=+796.297012323" watchObservedRunningTime="2025-12-11 18:13:55.272762086 +0000 UTC m=+796.299006120" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.351353 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmcl7"] Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.597367 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.707797 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-v5sxh" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.708023 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crsp6\" (UniqueName: \"kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6\") pod \"b086fb68-d6f7-4e4b-9729-5de2a03d3d30\" (UID: \"b086fb68-d6f7-4e4b-9729-5de2a03d3d30\") " Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.715406 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6" (OuterVolumeSpecName: "kube-api-access-crsp6") pod "b086fb68-d6f7-4e4b-9729-5de2a03d3d30" (UID: "b086fb68-d6f7-4e4b-9729-5de2a03d3d30"). InnerVolumeSpecName "kube-api-access-crsp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.809775 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crsp6\" (UniqueName: \"kubernetes.io/projected/b086fb68-d6f7-4e4b-9729-5de2a03d3d30-kube-api-access-crsp6\") on node \"crc\" DevicePath \"\"" Dec 11 18:13:55 crc kubenswrapper[4877]: I1211 18:13:55.841103 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-jtbmt" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.254654 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmcl7" event={"ID":"05174a0d-198b-4dbc-847c-164453075d91","Type":"ContainerStarted","Data":"c72d5f92f2e8c9e057497ff4a4e2a7964cbb38870bce5964fb57e481d211cd0e"} Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.254713 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmcl7" event={"ID":"05174a0d-198b-4dbc-847c-164453075d91","Type":"ContainerStarted","Data":"8160d05b3bb1db347fdc52e6055e1324e94e725462f84d39dd84186faf95e10a"} Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.256396 4877 generic.go:334] "Generic (PLEG): container finished" podID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" containerID="35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e" exitCode=0 Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.256434 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkv9p" event={"ID":"b086fb68-d6f7-4e4b-9729-5de2a03d3d30","Type":"ContainerDied","Data":"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e"} Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.256456 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gkv9p" 
event={"ID":"b086fb68-d6f7-4e4b-9729-5de2a03d3d30","Type":"ContainerDied","Data":"1c2c94ec5256ead75c64c0b7b78f3c0680443b1cdd66bc0ddfa0c14fa6a62236"} Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.256485 4877 scope.go:117] "RemoveContainer" containerID="35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.256483 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gkv9p" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.279076 4877 scope.go:117] "RemoveContainer" containerID="35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e" Dec 11 18:13:56 crc kubenswrapper[4877]: E1211 18:13:56.279594 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e\": container with ID starting with 35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e not found: ID does not exist" containerID="35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.279639 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e"} err="failed to get container status \"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e\": rpc error: code = NotFound desc = could not find container \"35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e\": container with ID starting with 35e89227efb68f4e6419fbe481b90653f34277ac64e885bab9441e6b274d687e not found: ID does not exist" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.296032 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nmcl7" podStartSLOduration=2.244204067 
podStartE2EDuration="2.296001133s" podCreationTimestamp="2025-12-11 18:13:54 +0000 UTC" firstStartedPulling="2025-12-11 18:13:55.380339644 +0000 UTC m=+796.406583688" lastFinishedPulling="2025-12-11 18:13:55.43213671 +0000 UTC m=+796.458380754" observedRunningTime="2025-12-11 18:13:56.290083652 +0000 UTC m=+797.316327696" watchObservedRunningTime="2025-12-11 18:13:56.296001133 +0000 UTC m=+797.322245177" Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.309407 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:56 crc kubenswrapper[4877]: I1211 18:13:56.312623 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gkv9p"] Dec 11 18:13:57 crc kubenswrapper[4877]: I1211 18:13:57.226583 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" path="/var/lib/kubelet/pods/b086fb68-d6f7-4e4b-9729-5de2a03d3d30/volumes" Dec 11 18:14:04 crc kubenswrapper[4877]: I1211 18:14:04.814477 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:14:04 crc kubenswrapper[4877]: I1211 18:14:04.816972 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:14:04 crc kubenswrapper[4877]: I1211 18:14:04.847258 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:14:05 crc kubenswrapper[4877]: I1211 18:14:05.350622 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nmcl7" Dec 11 18:14:05 crc kubenswrapper[4877]: I1211 18:14:05.693079 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2sg4m" Dec 11 18:14:06 crc kubenswrapper[4877]: 
I1211 18:14:06.124130 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52"] Dec 11 18:14:06 crc kubenswrapper[4877]: E1211 18:14:06.125344 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" containerName="registry-server" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.125495 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" containerName="registry-server" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.125683 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b086fb68-d6f7-4e4b-9729-5de2a03d3d30" containerName="registry-server" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.126915 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.130185 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-q5k54" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.143485 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52"] Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.269619 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.269700 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.270152 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmx2\" (UniqueName: \"kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.372342 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmx2\" (UniqueName: \"kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.372453 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.372483 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util\") 
pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.373000 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.373090 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.396494 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmx2\" (UniqueName: \"kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2\") pod \"196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.452490 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:06 crc kubenswrapper[4877]: I1211 18:14:06.865473 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52"] Dec 11 18:14:07 crc kubenswrapper[4877]: I1211 18:14:07.335231 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" event={"ID":"d106ca54-ea59-4bc1-9b2e-309981ea2055","Type":"ContainerStarted","Data":"024016736f95201fc7111a8804ec520a53dafc6d34e79a66b052e2a63368bbac"} Dec 11 18:14:10 crc kubenswrapper[4877]: I1211 18:14:10.363255 4877 generic.go:334] "Generic (PLEG): container finished" podID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerID="2109a1de2edb7ca8e9cd910168c0d3d6df1a90dca81a7608c5c53dc1a1fef248" exitCode=0 Dec 11 18:14:10 crc kubenswrapper[4877]: I1211 18:14:10.363390 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" event={"ID":"d106ca54-ea59-4bc1-9b2e-309981ea2055","Type":"ContainerDied","Data":"2109a1de2edb7ca8e9cd910168c0d3d6df1a90dca81a7608c5c53dc1a1fef248"} Dec 11 18:14:11 crc kubenswrapper[4877]: I1211 18:14:11.374249 4877 generic.go:334] "Generic (PLEG): container finished" podID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerID="35bcdf552c42999d44fa1ef3b2721f76be1dba57bf7223d0e3b0de130d11f394" exitCode=0 Dec 11 18:14:11 crc kubenswrapper[4877]: I1211 18:14:11.374331 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" event={"ID":"d106ca54-ea59-4bc1-9b2e-309981ea2055","Type":"ContainerDied","Data":"35bcdf552c42999d44fa1ef3b2721f76be1dba57bf7223d0e3b0de130d11f394"} Dec 11 18:14:12 crc kubenswrapper[4877]: I1211 18:14:12.386782 4877 generic.go:334] 
"Generic (PLEG): container finished" podID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerID="cca266f88b4839cdd442090b5f246508c4d80dbc9eb9033b4f9edf23f8efe4ca" exitCode=0 Dec 11 18:14:12 crc kubenswrapper[4877]: I1211 18:14:12.386859 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" event={"ID":"d106ca54-ea59-4bc1-9b2e-309981ea2055","Type":"ContainerDied","Data":"cca266f88b4839cdd442090b5f246508c4d80dbc9eb9033b4f9edf23f8efe4ca"} Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.787180 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.884248 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util\") pod \"d106ca54-ea59-4bc1-9b2e-309981ea2055\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.884541 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkmx2\" (UniqueName: \"kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2\") pod \"d106ca54-ea59-4bc1-9b2e-309981ea2055\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.884585 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle\") pod \"d106ca54-ea59-4bc1-9b2e-309981ea2055\" (UID: \"d106ca54-ea59-4bc1-9b2e-309981ea2055\") " Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.885808 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle" (OuterVolumeSpecName: "bundle") pod "d106ca54-ea59-4bc1-9b2e-309981ea2055" (UID: "d106ca54-ea59-4bc1-9b2e-309981ea2055"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.894328 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2" (OuterVolumeSpecName: "kube-api-access-gkmx2") pod "d106ca54-ea59-4bc1-9b2e-309981ea2055" (UID: "d106ca54-ea59-4bc1-9b2e-309981ea2055"). InnerVolumeSpecName "kube-api-access-gkmx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.899240 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util" (OuterVolumeSpecName: "util") pod "d106ca54-ea59-4bc1-9b2e-309981ea2055" (UID: "d106ca54-ea59-4bc1-9b2e-309981ea2055"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.986759 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkmx2\" (UniqueName: \"kubernetes.io/projected/d106ca54-ea59-4bc1-9b2e-309981ea2055-kube-api-access-gkmx2\") on node \"crc\" DevicePath \"\"" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.986841 4877 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:14:13 crc kubenswrapper[4877]: I1211 18:14:13.986859 4877 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d106ca54-ea59-4bc1-9b2e-309981ea2055-util\") on node \"crc\" DevicePath \"\"" Dec 11 18:14:14 crc kubenswrapper[4877]: I1211 18:14:14.407315 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" event={"ID":"d106ca54-ea59-4bc1-9b2e-309981ea2055","Type":"ContainerDied","Data":"024016736f95201fc7111a8804ec520a53dafc6d34e79a66b052e2a63368bbac"} Dec 11 18:14:14 crc kubenswrapper[4877]: I1211 18:14:14.407397 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="024016736f95201fc7111a8804ec520a53dafc6d34e79a66b052e2a63368bbac" Dec 11 18:14:14 crc kubenswrapper[4877]: I1211 18:14:14.407566 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52" Dec 11 18:14:16 crc kubenswrapper[4877]: I1211 18:14:16.637624 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:14:16 crc kubenswrapper[4877]: I1211 18:14:16.638187 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:14:16 crc kubenswrapper[4877]: I1211 18:14:16.638246 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:14:16 crc kubenswrapper[4877]: I1211 18:14:16.638913 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:14:16 crc kubenswrapper[4877]: I1211 18:14:16.638973 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3" gracePeriod=600 Dec 11 18:14:17 crc kubenswrapper[4877]: I1211 18:14:17.429508 4877 generic.go:334] "Generic (PLEG): 
container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3" exitCode=0 Dec 11 18:14:17 crc kubenswrapper[4877]: I1211 18:14:17.429576 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3"} Dec 11 18:14:17 crc kubenswrapper[4877]: I1211 18:14:17.430675 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5"} Dec 11 18:14:17 crc kubenswrapper[4877]: I1211 18:14:17.430711 4877 scope.go:117] "RemoveContainer" containerID="1816f7724e069a16fa107148c0cd6974775d923c16f991a70e728e08891f0bb9" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.373747 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg"] Dec 11 18:14:18 crc kubenswrapper[4877]: E1211 18:14:18.374026 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="pull" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.374040 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="pull" Dec 11 18:14:18 crc kubenswrapper[4877]: E1211 18:14:18.374056 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="extract" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.374062 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="extract" Dec 11 18:14:18 crc 
kubenswrapper[4877]: E1211 18:14:18.374075 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="util" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.374082 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="util" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.374201 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d106ca54-ea59-4bc1-9b2e-309981ea2055" containerName="extract" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.374692 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.377615 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-mcdlp" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.411027 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg"] Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.455294 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brn8\" (UniqueName: \"kubernetes.io/projected/fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169-kube-api-access-5brn8\") pod \"openstack-operator-controller-operator-8c9b75f7c-ccsvg\" (UID: \"fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169\") " pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.558419 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brn8\" (UniqueName: \"kubernetes.io/projected/fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169-kube-api-access-5brn8\") pod \"openstack-operator-controller-operator-8c9b75f7c-ccsvg\" 
(UID: \"fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169\") " pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.582201 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brn8\" (UniqueName: \"kubernetes.io/projected/fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169-kube-api-access-5brn8\") pod \"openstack-operator-controller-operator-8c9b75f7c-ccsvg\" (UID: \"fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169\") " pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:18 crc kubenswrapper[4877]: I1211 18:14:18.696065 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:19 crc kubenswrapper[4877]: I1211 18:14:19.062479 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg"] Dec 11 18:14:19 crc kubenswrapper[4877]: W1211 18:14:19.067536 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcdb3a3a_e3a4_42ce_a44a_eb34c2eda169.slice/crio-6b682ce76871d5dcfb3ea8c59e5af5b525cf7f077ab84642b5066a9629b0f37d WatchSource:0}: Error finding container 6b682ce76871d5dcfb3ea8c59e5af5b525cf7f077ab84642b5066a9629b0f37d: Status 404 returned error can't find the container with id 6b682ce76871d5dcfb3ea8c59e5af5b525cf7f077ab84642b5066a9629b0f37d Dec 11 18:14:19 crc kubenswrapper[4877]: I1211 18:14:19.451446 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" event={"ID":"fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169","Type":"ContainerStarted","Data":"6b682ce76871d5dcfb3ea8c59e5af5b525cf7f077ab84642b5066a9629b0f37d"} Dec 11 18:14:23 crc kubenswrapper[4877]: I1211 18:14:23.487178 4877 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" event={"ID":"fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169","Type":"ContainerStarted","Data":"599fbe8ba425428b8dd00afa407857094db79977e41f7da1974a1c7200e30327"} Dec 11 18:14:23 crc kubenswrapper[4877]: I1211 18:14:23.488360 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:28 crc kubenswrapper[4877]: I1211 18:14:28.699016 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" Dec 11 18:14:28 crc kubenswrapper[4877]: I1211 18:14:28.752342 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8c9b75f7c-ccsvg" podStartSLOduration=6.791950965 podStartE2EDuration="10.75231974s" podCreationTimestamp="2025-12-11 18:14:18 +0000 UTC" firstStartedPulling="2025-12-11 18:14:19.070632399 +0000 UTC m=+820.096876443" lastFinishedPulling="2025-12-11 18:14:23.031001174 +0000 UTC m=+824.057245218" observedRunningTime="2025-12-11 18:14:23.540670836 +0000 UTC m=+824.566914880" watchObservedRunningTime="2025-12-11 18:14:28.75231974 +0000 UTC m=+829.778563774" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.514134 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.520601 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.523927 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6275n" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.526856 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.528167 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.535186 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.535307 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lc6fm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.559097 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.560328 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.566286 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r7cn4" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.575220 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.604462 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.606150 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.606990 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.608731 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-t56j9" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.630603 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.631758 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.633971 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-c7qtd" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.642561 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.659531 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzk5\" (UniqueName: \"kubernetes.io/projected/c3dd6849-836b-462c-abbc-d97418287658-kube-api-access-qmzk5\") pod \"barbican-operator-controller-manager-7d9dfd778-ddr9m\" (UID: \"c3dd6849-836b-462c-abbc-d97418287658\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.659608 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhpw\" (UniqueName: \"kubernetes.io/projected/105c0e29-3d26-49ee-83f1-9ac47ec17cfd-kube-api-access-swhpw\") pod \"designate-operator-controller-manager-697fb699cf-hsvv2\" (UID: \"105c0e29-3d26-49ee-83f1-9ac47ec17cfd\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.659676 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nczv\" (UniqueName: \"kubernetes.io/projected/444a3f0d-8828-4958-9d25-61f4251d74c4-kube-api-access-8nczv\") pod \"cinder-operator-controller-manager-6c677c69b-h429c\" (UID: \"444a3f0d-8828-4958-9d25-61f4251d74c4\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 
18:14:48.662061 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.677441 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.678741 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.683836 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9dvnw" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.698632 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.699728 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.706991 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.707163 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rl7r5" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.710435 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.722618 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.736612 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.738017 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.740271 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vkg8s" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.741313 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.749461 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.750928 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.753070 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rdr78" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.757531 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.758858 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.781826 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gr8ns" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.786683 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nczv\" (UniqueName: \"kubernetes.io/projected/444a3f0d-8828-4958-9d25-61f4251d74c4-kube-api-access-8nczv\") pod \"cinder-operator-controller-manager-6c677c69b-h429c\" (UID: \"444a3f0d-8828-4958-9d25-61f4251d74c4\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.786850 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9m5\" (UniqueName: \"kubernetes.io/projected/fd25a8fc-0f52-4795-8a65-debdfdf452b3-kube-api-access-wp9m5\") pod \"heat-operator-controller-manager-5f64f6f8bb-c4nfg\" (UID: \"fd25a8fc-0f52-4795-8a65-debdfdf452b3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.786909 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzn9w\" (UniqueName: \"kubernetes.io/projected/18dda364-66d0-47d3-8c03-4b0ecb73a634-kube-api-access-hzn9w\") pod \"horizon-operator-controller-manager-68c6d99b8f-xtk9z\" (UID: \"18dda364-66d0-47d3-8c03-4b0ecb73a634\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.786953 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzk5\" (UniqueName: \"kubernetes.io/projected/c3dd6849-836b-462c-abbc-d97418287658-kube-api-access-qmzk5\") pod 
\"barbican-operator-controller-manager-7d9dfd778-ddr9m\" (UID: \"c3dd6849-836b-462c-abbc-d97418287658\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.787012 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhpw\" (UniqueName: \"kubernetes.io/projected/105c0e29-3d26-49ee-83f1-9ac47ec17cfd-kube-api-access-swhpw\") pod \"designate-operator-controller-manager-697fb699cf-hsvv2\" (UID: \"105c0e29-3d26-49ee-83f1-9ac47ec17cfd\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.787043 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzg79\" (UniqueName: \"kubernetes.io/projected/3670e0c0-f188-4f22-8097-52f0a00b3a47-kube-api-access-tzg79\") pod \"glance-operator-controller-manager-5697bb5779-nf9dm\" (UID: \"3670e0c0-f188-4f22-8097-52f0a00b3a47\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.876136 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.876955 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nczv\" (UniqueName: \"kubernetes.io/projected/444a3f0d-8828-4958-9d25-61f4251d74c4-kube-api-access-8nczv\") pod \"cinder-operator-controller-manager-6c677c69b-h429c\" (UID: \"444a3f0d-8828-4958-9d25-61f4251d74c4\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.877453 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzk5\" (UniqueName: 
\"kubernetes.io/projected/c3dd6849-836b-462c-abbc-d97418287658-kube-api-access-qmzk5\") pod \"barbican-operator-controller-manager-7d9dfd778-ddr9m\" (UID: \"c3dd6849-836b-462c-abbc-d97418287658\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.877527 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhpw\" (UniqueName: \"kubernetes.io/projected/105c0e29-3d26-49ee-83f1-9ac47ec17cfd-kube-api-access-swhpw\") pod \"designate-operator-controller-manager-697fb699cf-hsvv2\" (UID: \"105c0e29-3d26-49ee-83f1-9ac47ec17cfd\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.885134 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.887024 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.888589 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqj7m\" (UniqueName: \"kubernetes.io/projected/2fb4fbf5-e490-43ae-b7c5-8a2e481f7209-kube-api-access-qqj7m\") pod \"ironic-operator-controller-manager-967d97867-mcjpd\" (UID: \"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.888728 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9m5\" (UniqueName: \"kubernetes.io/projected/fd25a8fc-0f52-4795-8a65-debdfdf452b3-kube-api-access-wp9m5\") pod \"heat-operator-controller-manager-5f64f6f8bb-c4nfg\" (UID: \"fd25a8fc-0f52-4795-8a65-debdfdf452b3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.888825 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzn9w\" (UniqueName: \"kubernetes.io/projected/18dda364-66d0-47d3-8c03-4b0ecb73a634-kube-api-access-hzn9w\") pod \"horizon-operator-controller-manager-68c6d99b8f-xtk9z\" (UID: \"18dda364-66d0-47d3-8c03-4b0ecb73a634\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.888927 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp66\" (UniqueName: \"kubernetes.io/projected/6dcd317e-41f9-45e8-bd14-77d9f4ae25dd-kube-api-access-7mp66\") pod \"keystone-operator-controller-manager-7765d96ddf-tlsrq\" (UID: \"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:14:48 crc 
kubenswrapper[4877]: I1211 18:14:48.889010 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m2pk\" (UniqueName: \"kubernetes.io/projected/53a860ae-4169-4f47-8ba7-032c96b4be3a-kube-api-access-2m2pk\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.889101 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzg79\" (UniqueName: \"kubernetes.io/projected/3670e0c0-f188-4f22-8097-52f0a00b3a47-kube-api-access-tzg79\") pod \"glance-operator-controller-manager-5697bb5779-nf9dm\" (UID: \"3670e0c0-f188-4f22-8097-52f0a00b3a47\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.889173 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgct\" (UniqueName: \"kubernetes.io/projected/e0378195-6809-4f5c-b9f3-a37177789ee5-kube-api-access-7bgct\") pod \"manila-operator-controller-manager-5b5fd79c9c-nprs8\" (UID: \"e0378195-6809-4f5c-b9f3-a37177789ee5\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.889250 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.896350 4877 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.897616 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.904907 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-df56t" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.913684 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.924606 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.925278 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzg79\" (UniqueName: \"kubernetes.io/projected/3670e0c0-f188-4f22-8097-52f0a00b3a47-kube-api-access-tzg79\") pod \"glance-operator-controller-manager-5697bb5779-nf9dm\" (UID: \"3670e0c0-f188-4f22-8097-52f0a00b3a47\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.925567 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzn9w\" (UniqueName: \"kubernetes.io/projected/18dda364-66d0-47d3-8c03-4b0ecb73a634-kube-api-access-hzn9w\") pod \"horizon-operator-controller-manager-68c6d99b8f-xtk9z\" (UID: \"18dda364-66d0-47d3-8c03-4b0ecb73a634\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.929496 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9m5\" 
(UniqueName: \"kubernetes.io/projected/fd25a8fc-0f52-4795-8a65-debdfdf452b3-kube-api-access-wp9m5\") pod \"heat-operator-controller-manager-5f64f6f8bb-c4nfg\" (UID: \"fd25a8fc-0f52-4795-8a65-debdfdf452b3\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.932490 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.934474 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.936333 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.942191 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.943493 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.943923 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mpd8j" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.948161 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nkw4t" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.952884 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.955994 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.967676 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.974451 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-p97xh"] Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.975667 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.979712 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hwrhk" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.991350 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgct\" (UniqueName: \"kubernetes.io/projected/e0378195-6809-4f5c-b9f3-a37177789ee5-kube-api-access-7bgct\") pod \"manila-operator-controller-manager-5b5fd79c9c-nprs8\" (UID: \"e0378195-6809-4f5c-b9f3-a37177789ee5\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.991408 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 
18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.991439 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqj7m\" (UniqueName: \"kubernetes.io/projected/2fb4fbf5-e490-43ae-b7c5-8a2e481f7209-kube-api-access-qqj7m\") pod \"ironic-operator-controller-manager-967d97867-mcjpd\" (UID: \"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.991484 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mp66\" (UniqueName: \"kubernetes.io/projected/6dcd317e-41f9-45e8-bd14-77d9f4ae25dd-kube-api-access-7mp66\") pod \"keystone-operator-controller-manager-7765d96ddf-tlsrq\" (UID: \"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:14:48 crc kubenswrapper[4877]: I1211 18:14:48.991514 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m2pk\" (UniqueName: \"kubernetes.io/projected/53a860ae-4169-4f47-8ba7-032c96b4be3a-kube-api-access-2m2pk\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:48 crc kubenswrapper[4877]: E1211 18:14:48.992008 4877 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:48 crc kubenswrapper[4877]: E1211 18:14:48.992068 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:49.492048803 +0000 UTC m=+850.518292847 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.006763 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-p97xh"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.008783 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.035036 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m2pk\" (UniqueName: \"kubernetes.io/projected/53a860ae-4169-4f47-8ba7-032c96b4be3a-kube-api-access-2m2pk\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.038088 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mp66\" (UniqueName: \"kubernetes.io/projected/6dcd317e-41f9-45e8-bd14-77d9f4ae25dd-kube-api-access-7mp66\") pod \"keystone-operator-controller-manager-7765d96ddf-tlsrq\" (UID: \"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.038402 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.040082 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.044158 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqj7m\" (UniqueName: \"kubernetes.io/projected/2fb4fbf5-e490-43ae-b7c5-8a2e481f7209-kube-api-access-qqj7m\") pod \"ironic-operator-controller-manager-967d97867-mcjpd\" (UID: \"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.044691 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgct\" (UniqueName: \"kubernetes.io/projected/e0378195-6809-4f5c-b9f3-a37177789ee5-kube-api-access-7bgct\") pod \"manila-operator-controller-manager-5b5fd79c9c-nprs8\" (UID: \"e0378195-6809-4f5c-b9f3-a37177789ee5\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.060534 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pdqjz" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.078459 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.079840 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.083512 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wldgs"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.084263 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.084323 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.084516 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-krsnt" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.088020 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.106023 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.106983 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q2q2w" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.108211 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110054 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42znk\" (UniqueName: \"kubernetes.io/projected/8675dcf8-097e-4927-aa50-827f3034af41-kube-api-access-42znk\") pod \"placement-operator-controller-manager-78f8948974-wldgs\" (UID: \"8675dcf8-097e-4927-aa50-827f3034af41\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110131 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csddr\" (UniqueName: \"kubernetes.io/projected/e525cb88-4985-4374-a7f8-185c016e4a14-kube-api-access-csddr\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110155 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110202 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9h57\" (UniqueName: \"kubernetes.io/projected/099a6c32-cea0-4cea-b763-f60ba3e867e7-kube-api-access-c9h57\") pod \"nova-operator-controller-manager-697bc559fc-8vf65\" (UID: \"099a6c32-cea0-4cea-b763-f60ba3e867e7\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110227 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59jc\" (UniqueName: \"kubernetes.io/projected/efcf4499-fc58-4b4c-b047-c397b6154e38-kube-api-access-d59jc\") pod \"octavia-operator-controller-manager-998648c74-p97xh\" (UID: \"efcf4499-fc58-4b4c-b047-c397b6154e38\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110261 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfblf\" (UniqueName: \"kubernetes.io/projected/243bfab6-eced-4740-87ce-ab61441881f5-kube-api-access-rfblf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-qn5w6\" (UID: \"243bfab6-eced-4740-87ce-ab61441881f5\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110301 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg7qb\" (UniqueName: \"kubernetes.io/projected/1ebbc540-13e3-4fee-a9b7-10bb95da50b9-kube-api-access-fg7qb\") pod \"ovn-operator-controller-manager-b6456fdb6-7pc7q\" (UID: \"1ebbc540-13e3-4fee-a9b7-10bb95da50b9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.110346 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvvh\" (UniqueName: \"kubernetes.io/projected/f7fa49af-7b01-4972-aec4-5b2b42dee85f-kube-api-access-dkvvh\") pod \"mariadb-operator-controller-manager-79c8c4686c-q25n5\" (UID: \"f7fa49af-7b01-4972-aec4-5b2b42dee85f\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:14:49 
crc kubenswrapper[4877]: I1211 18:14:49.123476 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.137030 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wldgs"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.160539 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.183516 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.185342 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.188819 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jq9fc" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212420 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg7qb\" (UniqueName: \"kubernetes.io/projected/1ebbc540-13e3-4fee-a9b7-10bb95da50b9-kube-api-access-fg7qb\") pod \"ovn-operator-controller-manager-b6456fdb6-7pc7q\" (UID: \"1ebbc540-13e3-4fee-a9b7-10bb95da50b9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212492 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvvh\" (UniqueName: \"kubernetes.io/projected/f7fa49af-7b01-4972-aec4-5b2b42dee85f-kube-api-access-dkvvh\") pod \"mariadb-operator-controller-manager-79c8c4686c-q25n5\" (UID: 
\"f7fa49af-7b01-4972-aec4-5b2b42dee85f\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212532 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42znk\" (UniqueName: \"kubernetes.io/projected/8675dcf8-097e-4927-aa50-827f3034af41-kube-api-access-42znk\") pod \"placement-operator-controller-manager-78f8948974-wldgs\" (UID: \"8675dcf8-097e-4927-aa50-827f3034af41\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212585 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csddr\" (UniqueName: \"kubernetes.io/projected/e525cb88-4985-4374-a7f8-185c016e4a14-kube-api-access-csddr\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212609 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212646 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcrt\" (UniqueName: \"kubernetes.io/projected/ca3c6f4e-3491-4109-bf60-f4efbad58bc1-kube-api-access-6pcrt\") pod \"swift-operator-controller-manager-9d58d64bc-dmlvl\" (UID: \"ca3c6f4e-3491-4109-bf60-f4efbad58bc1\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 
18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212692 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9h57\" (UniqueName: \"kubernetes.io/projected/099a6c32-cea0-4cea-b763-f60ba3e867e7-kube-api-access-c9h57\") pod \"nova-operator-controller-manager-697bc559fc-8vf65\" (UID: \"099a6c32-cea0-4cea-b763-f60ba3e867e7\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212726 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59jc\" (UniqueName: \"kubernetes.io/projected/efcf4499-fc58-4b4c-b047-c397b6154e38-kube-api-access-d59jc\") pod \"octavia-operator-controller-manager-998648c74-p97xh\" (UID: \"efcf4499-fc58-4b4c-b047-c397b6154e38\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.212770 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfblf\" (UniqueName: \"kubernetes.io/projected/243bfab6-eced-4740-87ce-ab61441881f5-kube-api-access-rfblf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-qn5w6\" (UID: \"243bfab6-eced-4740-87ce-ab61441881f5\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.213904 4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.213960 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:14:49.713942068 +0000 UTC m=+850.740186112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.239455 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfblf\" (UniqueName: \"kubernetes.io/projected/243bfab6-eced-4740-87ce-ab61441881f5-kube-api-access-rfblf\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-qn5w6\" (UID: \"243bfab6-eced-4740-87ce-ab61441881f5\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.278654 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg7qb\" (UniqueName: \"kubernetes.io/projected/1ebbc540-13e3-4fee-a9b7-10bb95da50b9-kube-api-access-fg7qb\") pod \"ovn-operator-controller-manager-b6456fdb6-7pc7q\" (UID: \"1ebbc540-13e3-4fee-a9b7-10bb95da50b9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.299123 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.299163 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.306314 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csddr\" (UniqueName: \"kubernetes.io/projected/e525cb88-4985-4374-a7f8-185c016e4a14-kube-api-access-csddr\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.307313 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvvh\" (UniqueName: \"kubernetes.io/projected/f7fa49af-7b01-4972-aec4-5b2b42dee85f-kube-api-access-dkvvh\") pod \"mariadb-operator-controller-manager-79c8c4686c-q25n5\" (UID: \"f7fa49af-7b01-4972-aec4-5b2b42dee85f\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.308584 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9h57\" (UniqueName: \"kubernetes.io/projected/099a6c32-cea0-4cea-b763-f60ba3e867e7-kube-api-access-c9h57\") pod \"nova-operator-controller-manager-697bc559fc-8vf65\" (UID: \"099a6c32-cea0-4cea-b763-f60ba3e867e7\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.313025 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42znk\" (UniqueName: \"kubernetes.io/projected/8675dcf8-097e-4927-aa50-827f3034af41-kube-api-access-42znk\") pod \"placement-operator-controller-manager-78f8948974-wldgs\" (UID: \"8675dcf8-097e-4927-aa50-827f3034af41\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.313162 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.315421 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.316274 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59jc\" (UniqueName: \"kubernetes.io/projected/efcf4499-fc58-4b4c-b047-c397b6154e38-kube-api-access-d59jc\") pod \"octavia-operator-controller-manager-998648c74-p97xh\" (UID: \"efcf4499-fc58-4b4c-b047-c397b6154e38\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.316532 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.317343 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcrt\" (UniqueName: \"kubernetes.io/projected/ca3c6f4e-3491-4109-bf60-f4efbad58bc1-kube-api-access-6pcrt\") pod \"swift-operator-controller-manager-9d58d64bc-dmlvl\" (UID: \"ca3c6f4e-3491-4109-bf60-f4efbad58bc1\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.322188 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p5rrn" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.363592 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.364418 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcrt\" (UniqueName: \"kubernetes.io/projected/ca3c6f4e-3491-4109-bf60-f4efbad58bc1-kube-api-access-6pcrt\") pod \"swift-operator-controller-manager-9d58d64bc-dmlvl\" (UID: \"ca3c6f4e-3491-4109-bf60-f4efbad58bc1\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.365051 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.385090 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.403849 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.405301 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.440165 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qvkj9" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.459070 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.464941 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.502050 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.538162 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.559797 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.561486 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.561760 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.565178 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sq5nl" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.567305 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.588124 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmrb\" (UniqueName: \"kubernetes.io/projected/a571f5fa-fa44-48fd-b675-f0b42607ac7d-kube-api-access-kfmrb\") pod \"telemetry-operator-controller-manager-58d5ff84df-d77ht\" (UID: \"a571f5fa-fa44-48fd-b675-f0b42607ac7d\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.588333 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.588395 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp74k\" (UniqueName: \"kubernetes.io/projected/105e535b-6aee-4187-9008-65a41e6e3572-kube-api-access-jp74k\") pod \"test-operator-controller-manager-5854674fcc-9sw8b\" (UID: \"105e535b-6aee-4187-9008-65a41e6e3572\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.588892 4877 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.588943 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:50.588928446 +0000 UTC m=+851.615172490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.592617 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.593728 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.596516 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.597265 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8m56x" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.599053 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.600941 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.615594 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.624656 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.624799 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.629542 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b86fb" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.694893 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmrb\" (UniqueName: \"kubernetes.io/projected/a571f5fa-fa44-48fd-b675-f0b42607ac7d-kube-api-access-kfmrb\") pod \"telemetry-operator-controller-manager-58d5ff84df-d77ht\" (UID: \"a571f5fa-fa44-48fd-b675-f0b42607ac7d\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.694964 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpd26\" (UniqueName: \"kubernetes.io/projected/15a74efd-e36a-4946-a9e5-2453c98355aa-kube-api-access-cpd26\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7cg2z\" (UID: \"15a74efd-e36a-4946-a9e5-2453c98355aa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.695020 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs22\" (UniqueName: \"kubernetes.io/projected/c21c4469-97a3-47c7-bced-d7d18aa1008a-kube-api-access-jcs22\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.695082 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.695109 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.695141 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5mh\" (UniqueName: \"kubernetes.io/projected/35acba78-7a75-40d7-b5fb-43d595c3bc1f-kube-api-access-7c5mh\") pod \"watcher-operator-controller-manager-75944c9b7-zm6ts\" (UID: \"35acba78-7a75-40d7-b5fb-43d595c3bc1f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.695345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp74k\" (UniqueName: \"kubernetes.io/projected/105e535b-6aee-4187-9008-65a41e6e3572-kube-api-access-jp74k\") pod \"test-operator-controller-manager-5854674fcc-9sw8b\" (UID: \"105e535b-6aee-4187-9008-65a41e6e3572\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.738011 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmrb\" (UniqueName: \"kubernetes.io/projected/a571f5fa-fa44-48fd-b675-f0b42607ac7d-kube-api-access-kfmrb\") pod \"telemetry-operator-controller-manager-58d5ff84df-d77ht\" 
(UID: \"a571f5fa-fa44-48fd-b675-f0b42607ac7d\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.742066 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp74k\" (UniqueName: \"kubernetes.io/projected/105e535b-6aee-4187-9008-65a41e6e3572-kube-api-access-jp74k\") pod \"test-operator-controller-manager-5854674fcc-9sw8b\" (UID: \"105e535b-6aee-4187-9008-65a41e6e3572\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.796914 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c5mh\" (UniqueName: \"kubernetes.io/projected/35acba78-7a75-40d7-b5fb-43d595c3bc1f-kube-api-access-7c5mh\") pod \"watcher-operator-controller-manager-75944c9b7-zm6ts\" (UID: \"35acba78-7a75-40d7-b5fb-43d595c3bc1f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.797014 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.797069 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpd26\" (UniqueName: \"kubernetes.io/projected/15a74efd-e36a-4946-a9e5-2453c98355aa-kube-api-access-cpd26\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7cg2z\" (UID: \"15a74efd-e36a-4946-a9e5-2453c98355aa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 
18:14:49.797117 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs22\" (UniqueName: \"kubernetes.io/projected/c21c4469-97a3-47c7-bced-d7d18aa1008a-kube-api-access-jcs22\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.797172 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.797197 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.797733 4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.797817 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:14:50.797794899 +0000 UTC m=+851.824039113 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.797917 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.797967 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:50.297951613 +0000 UTC m=+851.324195657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.798016 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: E1211 18:14:49.798052 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:50.298038705 +0000 UTC m=+851.324282749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.820121 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpd26\" (UniqueName: \"kubernetes.io/projected/15a74efd-e36a-4946-a9e5-2453c98355aa-kube-api-access-cpd26\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7cg2z\" (UID: \"15a74efd-e36a-4946-a9e5-2453c98355aa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.822084 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c5mh\" (UniqueName: \"kubernetes.io/projected/35acba78-7a75-40d7-b5fb-43d595c3bc1f-kube-api-access-7c5mh\") pod \"watcher-operator-controller-manager-75944c9b7-zm6ts\" (UID: \"35acba78-7a75-40d7-b5fb-43d595c3bc1f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.832483 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs22\" (UniqueName: \"kubernetes.io/projected/c21c4469-97a3-47c7-bced-d7d18aa1008a-kube-api-access-jcs22\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.860259 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.879645 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.894157 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2"] Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.915108 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.915676 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:14:49 crc kubenswrapper[4877]: I1211 18:14:49.960994 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.004901 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.023759 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg"] Dec 11 18:14:50 crc kubenswrapper[4877]: W1211 18:14:50.035670 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3670e0c0_f188_4f22_8097_52f0a00b3a47.slice/crio-fe1db1a5cdd808d10c09309e9a2e124031af7fa14b6dd969aad881ece16c3c87 WatchSource:0}: Error finding container fe1db1a5cdd808d10c09309e9a2e124031af7fa14b6dd969aad881ece16c3c87: Status 404 returned error can't find the container with id fe1db1a5cdd808d10c09309e9a2e124031af7fa14b6dd969aad881ece16c3c87 Dec 11 18:14:50 crc kubenswrapper[4877]: W1211 18:14:50.104967 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd25a8fc_0f52_4795_8a65_debdfdf452b3.slice/crio-763f015f64f011cafb40d1b13c5e502aae28bf85e4f823fe88a40caf04572c1f WatchSource:0}: Error finding container 763f015f64f011cafb40d1b13c5e502aae28bf85e4f823fe88a40caf04572c1f: Status 404 returned error can't find the container with id 763f015f64f011cafb40d1b13c5e502aae28bf85e4f823fe88a40caf04572c1f Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.151152 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.235152 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd"] Dec 11 18:14:50 crc 
kubenswrapper[4877]: W1211 18:14:50.247570 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb4fbf5_e490_43ae_b7c5_8a2e481f7209.slice/crio-9e9bac48ed781a046402b279a8d11c9d6ce3f12761cc3a88ff74ea70d16cc437 WatchSource:0}: Error finding container 9e9bac48ed781a046402b279a8d11c9d6ce3f12761cc3a88ff74ea70d16cc437: Status 404 returned error can't find the container with id 9e9bac48ed781a046402b279a8d11c9d6ce3f12761cc3a88ff74ea70d16cc437 Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.306649 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.306705 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.307804 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.307870 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:51.307846221 +0000 UTC m=+852.334090265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.308831 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.308873 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:51.308862555 +0000 UTC m=+852.335106599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.380744 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.390402 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8"] Dec 11 18:14:50 crc kubenswrapper[4877]: W1211 18:14:50.400240 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0378195_6809_4f5c_b9f3_a37177789ee5.slice/crio-ba05244074ade93b7c42664b940d76627828f2e749014778f1700afbfc120620 WatchSource:0}: Error finding container ba05244074ade93b7c42664b940d76627828f2e749014778f1700afbfc120620: Status 404 returned error can't find the 
container with id ba05244074ade93b7c42664b940d76627828f2e749014778f1700afbfc120620 Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.402982 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.415399 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.422534 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.583809 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wldgs"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.602481 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.613332 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.613607 4877 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.613673 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. 
No retries permitted until 2025-12-11 18:14:52.613656488 +0000 UTC m=+853.639900532 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.630987 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pcrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-dmlvl_openstack-operators(ca3c6f4e-3491-4109-bf60-f4efbad58bc1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.634097 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6pcrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-dmlvl_openstack-operators(ca3c6f4e-3491-4109-bf60-f4efbad58bc1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.636153 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" podUID="ca3c6f4e-3491-4109-bf60-f4efbad58bc1" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.641104 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9h57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-8vf65_openstack-operators(099a6c32-cea0-4cea-b763-f60ba3e867e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.642633 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d59jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-p97xh_openstack-operators(efcf4499-fc58-4b4c-b047-c397b6154e38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.646626 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9h57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-8vf65_openstack-operators(099a6c32-cea0-4cea-b763-f60ba3e867e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.647836 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" podUID="099a6c32-cea0-4cea-b763-f60ba3e867e7" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.648753 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d59jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-p97xh_openstack-operators(efcf4499-fc58-4b4c-b047-c397b6154e38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.648899 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6"] Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.650004 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", 
failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" podUID="efcf4499-fc58-4b4c-b047-c397b6154e38" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.659977 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.664624 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-p97xh"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.703825 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" event={"ID":"fd25a8fc-0f52-4795-8a65-debdfdf452b3","Type":"ContainerStarted","Data":"763f015f64f011cafb40d1b13c5e502aae28bf85e4f823fe88a40caf04572c1f"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.705922 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" event={"ID":"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209","Type":"ContainerStarted","Data":"9e9bac48ed781a046402b279a8d11c9d6ce3f12761cc3a88ff74ea70d16cc437"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.706892 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" event={"ID":"efcf4499-fc58-4b4c-b047-c397b6154e38","Type":"ContainerStarted","Data":"d2e7fe6abba06451c1d3179096a4a4f8cb96f9df201e93e5674dbda6d021458c"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.709163 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" 
event={"ID":"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd","Type":"ContainerStarted","Data":"618bec56ae511a315c3d4ebf0c78b74945c9381902843b337f50f28da9e2efd7"} Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.710770 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" podUID="efcf4499-fc58-4b4c-b047-c397b6154e38" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.711083 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" event={"ID":"1ebbc540-13e3-4fee-a9b7-10bb95da50b9","Type":"ContainerStarted","Data":"ec203939e05284c78057b1adb0b9730b908d92b151cf295b0558e3d59b58069c"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.714436 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" event={"ID":"099a6c32-cea0-4cea-b763-f60ba3e867e7","Type":"ContainerStarted","Data":"af223b15c339266ade2cf576e6fca6a7189bd33dcec48b2f5b0d5f14c74788f5"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.715933 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" event={"ID":"3670e0c0-f188-4f22-8097-52f0a00b3a47","Type":"ContainerStarted","Data":"fe1db1a5cdd808d10c09309e9a2e124031af7fa14b6dd969aad881ece16c3c87"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.717796 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" event={"ID":"e0378195-6809-4f5c-b9f3-a37177789ee5","Type":"ContainerStarted","Data":"ba05244074ade93b7c42664b940d76627828f2e749014778f1700afbfc120620"} Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.721894 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" podUID="099a6c32-cea0-4cea-b763-f60ba3e867e7" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.724418 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" event={"ID":"105c0e29-3d26-49ee-83f1-9ac47ec17cfd","Type":"ContainerStarted","Data":"980644a9abe2ab2decbcb10c8e960055b7d7bef28a2c3cf53473fb14c2b761d2"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.738484 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" event={"ID":"c3dd6849-836b-462c-abbc-d97418287658","Type":"ContainerStarted","Data":"5794869a3ff120fe73ebd6490de12c8e478e6064e77a1864b306fbef536a6d0c"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.742744 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" event={"ID":"444a3f0d-8828-4958-9d25-61f4251d74c4","Type":"ContainerStarted","Data":"9543c6b875f16ae936e3cfe9cda1319e1b957c4539a216eb3012e846de95be08"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.745764 4877 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" event={"ID":"243bfab6-eced-4740-87ce-ab61441881f5","Type":"ContainerStarted","Data":"731f2b8862a81d8da672879920182f02c84ec7becae7ef7e09fc7dfca5b3c681"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.748813 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" event={"ID":"8675dcf8-097e-4927-aa50-827f3034af41","Type":"ContainerStarted","Data":"c9d90ada668ebf7f66af236ae36a0da6b6737f560424df20c11bf4d399b4eed8"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.750170 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" event={"ID":"f7fa49af-7b01-4972-aec4-5b2b42dee85f","Type":"ContainerStarted","Data":"fae9cb2ed91dad3fe1ef78adf0e7f84d98a19260f69c35f36fdef8ca6e6650fc"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.752391 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.752532 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" event={"ID":"ca3c6f4e-3491-4109-bf60-f4efbad58bc1","Type":"ContainerStarted","Data":"853fa90af538e575a817623f71e52f7d21e79f9a082045fa2551e2549fc7c00e"} Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.755596 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" event={"ID":"18dda364-66d0-47d3-8c03-4b0ecb73a634","Type":"ContainerStarted","Data":"12bacb8c8f681215080e795131b93eb047ebb874d645bb4c61c2af6f691d2869"} Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.756055 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" podUID="ca3c6f4e-3491-4109-bf60-f4efbad58bc1" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.760251 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cpd26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7cg2z_openstack-operators(15a74efd-e36a-4946-a9e5-2453c98355aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.761447 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" podUID="15a74efd-e36a-4946-a9e5-2453c98355aa" Dec 11 18:14:50 crc kubenswrapper[4877]: W1211 18:14:50.765864 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod105e535b_6aee_4187_9008_65a41e6e3572.slice/crio-ce3c5d9cad60b48af0d8311a2aa97495a8cecd407313e0c275289d1c9587fbbf WatchSource:0}: Error finding container ce3c5d9cad60b48af0d8311a2aa97495a8cecd407313e0c275289d1c9587fbbf: Status 404 returned error can't find the container with id ce3c5d9cad60b48af0d8311a2aa97495a8cecd407313e0c275289d1c9587fbbf Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.771675 
4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b"] Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.785655 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts"] Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.787936 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp74k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-9sw8b_openstack-operators(105e535b-6aee-4187-9008-65a41e6e3572): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.790360 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp74k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-9sw8b_openstack-operators(105e535b-6aee-4187-9008-65a41e6e3572): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.791629 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" podUID="105e535b-6aee-4187-9008-65a41e6e3572" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.798947 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfmrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-d77ht_openstack-operators(a571f5fa-fa44-48fd-b675-f0b42607ac7d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.801877 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfmrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-d77ht_openstack-operators(a571f5fa-fa44-48fd-b675-f0b42607ac7d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.803899 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" podUID="a571f5fa-fa44-48fd-b675-f0b42607ac7d" Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.816334 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.816504 
4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: E1211 18:14:50.816660 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:14:52.816629942 +0000 UTC m=+853.842873986 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:50 crc kubenswrapper[4877]: I1211 18:14:50.821284 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht"] Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.331322 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.331972 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 
18:14:51.331589 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.332305 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.332211 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:53.332176715 +0000 UTC m=+854.358420779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.332572 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:53.332561944 +0000 UTC m=+854.358805988 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.813341 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" podUID="15a74efd-e36a-4946-a9e5-2453c98355aa" Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.818409 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" event={"ID":"15a74efd-e36a-4946-a9e5-2453c98355aa","Type":"ContainerStarted","Data":"e06ea2cfc19b6f259bf92d603f69836f5dc65ac579888d12aff55ffd36f42297"} Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.833910 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" event={"ID":"a571f5fa-fa44-48fd-b675-f0b42607ac7d","Type":"ContainerStarted","Data":"600553b19cb14650a3d400d2abe90c0597c635577b7a9706b51f3332e193feaa"} Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.846599 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" podUID="a571f5fa-fa44-48fd-b675-f0b42607ac7d" Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.858718 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" event={"ID":"105e535b-6aee-4187-9008-65a41e6e3572","Type":"ContainerStarted","Data":"ce3c5d9cad60b48af0d8311a2aa97495a8cecd407313e0c275289d1c9587fbbf"} Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.872652 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" podUID="105e535b-6aee-4187-9008-65a41e6e3572" Dec 11 18:14:51 crc kubenswrapper[4877]: I1211 18:14:51.883585 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" event={"ID":"35acba78-7a75-40d7-b5fb-43d595c3bc1f","Type":"ContainerStarted","Data":"31bf9c341c8acd447de928c2b334608fd4b7f3a1a651c6e2337b9de17623584d"} Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.892822 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" podUID="099a6c32-cea0-4cea-b763-f60ba3e867e7" Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.892978 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" podUID="ca3c6f4e-3491-4109-bf60-f4efbad58bc1" Dec 11 18:14:51 crc kubenswrapper[4877]: E1211 18:14:51.918719 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" podUID="efcf4499-fc58-4b4c-b047-c397b6154e38" Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.699302 4877 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.699655 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:56.699637926 +0000 UTC m=+857.725881970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:52 crc kubenswrapper[4877]: I1211 18:14:52.699160 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:52 crc kubenswrapper[4877]: I1211 18:14:52.902966 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.903329 4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.903409 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:14:56.903387628 +0000 UTC m=+857.929631672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.906329 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" podUID="15a74efd-e36a-4946-a9e5-2453c98355aa" Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.906500 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" podUID="105e535b-6aee-4187-9008-65a41e6e3572" Dec 11 18:14:52 crc kubenswrapper[4877]: E1211 18:14:52.907151 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" podUID="a571f5fa-fa44-48fd-b675-f0b42607ac7d" Dec 11 18:14:53 crc kubenswrapper[4877]: I1211 18:14:53.424328 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:53 crc kubenswrapper[4877]: I1211 18:14:53.424412 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:53 crc kubenswrapper[4877]: E1211 18:14:53.424549 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:14:53 crc kubenswrapper[4877]: E1211 18:14:53.424574 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:14:53 crc kubenswrapper[4877]: E1211 18:14:53.424613 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:57.424595925 +0000 UTC m=+858.450839969 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:14:53 crc kubenswrapper[4877]: E1211 18:14:53.424680 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:14:57.424654756 +0000 UTC m=+858.450898970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:14:56 crc kubenswrapper[4877]: I1211 18:14:56.796961 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:14:56 crc kubenswrapper[4877]: E1211 18:14:56.797276 4877 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:56 crc kubenswrapper[4877]: E1211 18:14:56.797653 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:04.797625996 +0000 UTC m=+865.823870040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: I1211 18:14:57.001409 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.001762 4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.001845 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:15:05.001821573 +0000 UTC m=+866.028065637 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: I1211 18:14:57.432570 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:57 crc kubenswrapper[4877]: I1211 18:14:57.432630 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.432832 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.432937 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:05.432919179 +0000 UTC m=+866.459163223 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.433259 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:14:57 crc kubenswrapper[4877]: E1211 18:14:57.433404 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:05.433350331 +0000 UTC m=+866.459594375 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.156158 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs"] Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.157870 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.160058 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.160393 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.169731 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs"] Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.294196 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.294501 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.294803 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5cd\" (UniqueName: \"kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.396299 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.396397 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5cd\" (UniqueName: \"kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.396478 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.397690 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.408426 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.422996 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5cd\" (UniqueName: \"kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd\") pod \"collect-profiles-29424615-65zjs\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:00 crc kubenswrapper[4877]: I1211 18:15:00.477862 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:04 crc kubenswrapper[4877]: E1211 18:15:04.855460 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 11 18:15:04 crc kubenswrapper[4877]: E1211 18:15:04.856559 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mp66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-tlsrq_openstack-operators(6dcd317e-41f9-45e8-bd14-77d9f4ae25dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:04 crc kubenswrapper[4877]: I1211 18:15:04.878923 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:04 crc kubenswrapper[4877]: E1211 18:15:04.879615 4877 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 18:15:04 crc kubenswrapper[4877]: E1211 18:15:04.881641 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert podName:53a860ae-4169-4f47-8ba7-032c96b4be3a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:20.881614078 +0000 UTC m=+881.907858122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert") pod "infra-operator-controller-manager-6797f5b887-q9vgk" (UID: "53a860ae-4169-4f47-8ba7-032c96b4be3a") : secret "infra-operator-webhook-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: I1211 18:15:05.084148 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.084725 4877 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.084950 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert 
podName:e525cb88-4985-4374-a7f8-185c016e4a14 nodeName:}" failed. No retries permitted until 2025-12-11 18:15:21.084934092 +0000 UTC m=+882.111178136 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f4zhrq" (UID: "e525cb88-4985-4374-a7f8-185c016e4a14") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: I1211 18:15:05.409069 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs"] Dec 11 18:15:05 crc kubenswrapper[4877]: I1211 18:15:05.492700 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:05 crc kubenswrapper[4877]: I1211 18:15:05.492742 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.492919 4877 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.492972 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs 
podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:21.492955577 +0000 UTC m=+882.519199611 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "metrics-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.493299 4877 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.493320 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs podName:c21c4469-97a3-47c7-bced-d7d18aa1008a nodeName:}" failed. No retries permitted until 2025-12-11 18:15:21.493313667 +0000 UTC m=+882.519557711 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs") pod "openstack-operator-controller-manager-545595b497-h5vf4" (UID: "c21c4469-97a3-47c7-bced-d7d18aa1008a") : secret "webhook-server-cert" not found Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.589747 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkvvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-q25n5_openstack-operators(f7fa49af-7b01-4972-aec4-5b2b42dee85f): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.589866 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swhpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-hsvv2_openstack-operators(105c0e29-3d26-49ee-83f1-9ac47ec17cfd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.590986 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" podUID="105c0e29-3d26-49ee-83f1-9ac47ec17cfd" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.591040 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" podUID="f7fa49af-7b01-4972-aec4-5b2b42dee85f" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.604662 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqj7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-mcjpd_openstack-operators(2fb4fbf5-e490-43ae-b7c5-8a2e481f7209): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 18:15:05 crc kubenswrapper[4877]: E1211 18:15:05.605931 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" podUID="2fb4fbf5-e490-43ae-b7c5-8a2e481f7209" Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.051333 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" event={"ID":"105c0e29-3d26-49ee-83f1-9ac47ec17cfd","Type":"ContainerStarted","Data":"369c3bf9b164ef07ff741f3b539e49f110d0f80cbe2f40179c1a6dc2d8dfea52"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.051428 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:15:06 crc kubenswrapper[4877]: 
E1211 18:15:06.058541 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" podUID="105c0e29-3d26-49ee-83f1-9ac47ec17cfd" Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.064087 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" event={"ID":"18dda364-66d0-47d3-8c03-4b0ecb73a634","Type":"ContainerStarted","Data":"9717626bd518a663e9a2b0f3051aa37324afe261b6b7011fb30dbca3d0c177bf"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.081838 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" event={"ID":"4de7ea9a-363d-43ef-82ee-39d39cd1261e","Type":"ContainerStarted","Data":"2c9f87cc10cff146b2b99b785816b59d105aa588dabf57b51cac2649b90e7c2a"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.081915 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" event={"ID":"4de7ea9a-363d-43ef-82ee-39d39cd1261e","Type":"ContainerStarted","Data":"8081af60c0c537659d64a753ce304d40a6d650cc55de0b907811700752cc78fc"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.117829 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" event={"ID":"1ebbc540-13e3-4fee-a9b7-10bb95da50b9","Type":"ContainerStarted","Data":"bfc5e2064dbd8abe396920bdcdc32c27a23d048e38c037466ac9c7d811419653"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.142660 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" 
event={"ID":"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209","Type":"ContainerStarted","Data":"35d8c819d8cf5034cdfe16f8a9ed5f5aac8715a692ecaf37b509482b3b148380"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.143813 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:15:06 crc kubenswrapper[4877]: E1211 18:15:06.149402 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" podUID="2fb4fbf5-e490-43ae-b7c5-8a2e481f7209" Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.178778 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" event={"ID":"8675dcf8-097e-4927-aa50-827f3034af41","Type":"ContainerStarted","Data":"4a8a7782c7f040318d2426d43e1780b3754a7e7c02dd749e8a2fbdff958e9393"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.192750 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" podStartSLOduration=6.1927216959999996 podStartE2EDuration="6.192721696s" podCreationTimestamp="2025-12-11 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:15:06.123023319 +0000 UTC m=+867.149267363" watchObservedRunningTime="2025-12-11 18:15:06.192721696 +0000 UTC m=+867.218965740" Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.215474 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" 
event={"ID":"e0378195-6809-4f5c-b9f3-a37177789ee5","Type":"ContainerStarted","Data":"4be14e0ad56279003f750d1b007028fef736280cea6f86a37802bdef62f80924"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.222541 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" event={"ID":"fd25a8fc-0f52-4795-8a65-debdfdf452b3","Type":"ContainerStarted","Data":"e1c25017073f5727d7797222c7b4930551a36473469c8d751e71b9d96036bae4"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.232872 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" event={"ID":"243bfab6-eced-4740-87ce-ab61441881f5","Type":"ContainerStarted","Data":"1a283381441637a06595ffe456a91e6d93b037fa573a91a3b646cb296af4d8b4"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.236984 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" event={"ID":"c3dd6849-836b-462c-abbc-d97418287658","Type":"ContainerStarted","Data":"04c48a5123b0b88cf7824a4328196186337bf27835ae979a541ba8507edea0a5"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.249411 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" event={"ID":"444a3f0d-8828-4958-9d25-61f4251d74c4","Type":"ContainerStarted","Data":"ce0e8ec56d609acc0e3e585f3a0175f7959de7ef39568930390ab724f7c76728"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.254065 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" event={"ID":"3670e0c0-f188-4f22-8097-52f0a00b3a47","Type":"ContainerStarted","Data":"a8b2c945a044518e5243439c8e3bee32b61c991c21115267f41aeb85eaf7b739"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.277914 4877 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" event={"ID":"f7fa49af-7b01-4972-aec4-5b2b42dee85f","Type":"ContainerStarted","Data":"30da585438d5e91d7d418f23652fe17ee1de37cd3f9847e4480d77001140fd4c"} Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.278981 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:15:06 crc kubenswrapper[4877]: E1211 18:15:06.291945 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" podUID="f7fa49af-7b01-4972-aec4-5b2b42dee85f" Dec 11 18:15:06 crc kubenswrapper[4877]: I1211 18:15:06.306570 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" event={"ID":"35acba78-7a75-40d7-b5fb-43d595c3bc1f","Type":"ContainerStarted","Data":"e35b5ca50921b9041dab2bb34de5cf516f6bfecb2021300cd295dc2f21bc3f13"} Dec 11 18:15:07 crc kubenswrapper[4877]: I1211 18:15:07.322691 4877 generic.go:334] "Generic (PLEG): container finished" podID="4de7ea9a-363d-43ef-82ee-39d39cd1261e" containerID="2c9f87cc10cff146b2b99b785816b59d105aa588dabf57b51cac2649b90e7c2a" exitCode=0 Dec 11 18:15:07 crc kubenswrapper[4877]: I1211 18:15:07.323045 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" event={"ID":"4de7ea9a-363d-43ef-82ee-39d39cd1261e","Type":"ContainerDied","Data":"2c9f87cc10cff146b2b99b785816b59d105aa588dabf57b51cac2649b90e7c2a"} Dec 11 18:15:07 crc kubenswrapper[4877]: E1211 18:15:07.325140 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" podUID="105c0e29-3d26-49ee-83f1-9ac47ec17cfd" Dec 11 18:15:07 crc kubenswrapper[4877]: E1211 18:15:07.325605 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" podUID="f7fa49af-7b01-4972-aec4-5b2b42dee85f" Dec 11 18:15:07 crc kubenswrapper[4877]: E1211 18:15:07.326415 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" podUID="2fb4fbf5-e490-43ae-b7c5-8a2e481f7209" Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.857555 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.966235 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume\") pod \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.966518 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw5cd\" (UniqueName: \"kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd\") pod \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.966561 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume\") pod \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\" (UID: \"4de7ea9a-363d-43ef-82ee-39d39cd1261e\") " Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.967790 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume" (OuterVolumeSpecName: "config-volume") pod "4de7ea9a-363d-43ef-82ee-39d39cd1261e" (UID: "4de7ea9a-363d-43ef-82ee-39d39cd1261e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.973630 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4de7ea9a-363d-43ef-82ee-39d39cd1261e" (UID: "4de7ea9a-363d-43ef-82ee-39d39cd1261e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:15:08 crc kubenswrapper[4877]: I1211 18:15:08.975603 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd" (OuterVolumeSpecName: "kube-api-access-dw5cd") pod "4de7ea9a-363d-43ef-82ee-39d39cd1261e" (UID: "4de7ea9a-363d-43ef-82ee-39d39cd1261e"). InnerVolumeSpecName "kube-api-access-dw5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.068259 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw5cd\" (UniqueName: \"kubernetes.io/projected/4de7ea9a-363d-43ef-82ee-39d39cd1261e-kube-api-access-dw5cd\") on node \"crc\" DevicePath \"\"" Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.068782 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4de7ea9a-363d-43ef-82ee-39d39cd1261e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.068797 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4de7ea9a-363d-43ef-82ee-39d39cd1261e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.345294 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" event={"ID":"4de7ea9a-363d-43ef-82ee-39d39cd1261e","Type":"ContainerDied","Data":"8081af60c0c537659d64a753ce304d40a6d650cc55de0b907811700752cc78fc"} Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.345349 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8081af60c0c537659d64a753ce304d40a6d650cc55de0b907811700752cc78fc" Dec 11 18:15:09 crc kubenswrapper[4877]: I1211 18:15:09.345437 4877 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs" Dec 11 18:15:18 crc kubenswrapper[4877]: E1211 18:15:18.060852 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:18 crc kubenswrapper[4877]: E1211 18:15:18.061780 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfblf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-5fdfd5b6b5-qn5w6_openstack-operators(243bfab6-eced-4740-87ce-ab61441881f5): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 11 18:15:18 crc kubenswrapper[4877]: E1211 18:15:18.063268 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" podUID="243bfab6-eced-4740-87ce-ab61441881f5" Dec 11 18:15:18 crc kubenswrapper[4877]: I1211 18:15:18.418982 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:15:18 crc kubenswrapper[4877]: E1211 18:15:18.420526 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" podUID="243bfab6-eced-4740-87ce-ab61441881f5" Dec 11 18:15:18 crc kubenswrapper[4877]: I1211 18:15:18.421964 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" Dec 11 18:15:18 crc kubenswrapper[4877]: I1211 18:15:18.917694 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" Dec 11 18:15:19 crc kubenswrapper[4877]: I1211 18:15:19.087938 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" Dec 11 18:15:19 crc kubenswrapper[4877]: I1211 18:15:19.367509 4877 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" Dec 11 18:15:19 crc kubenswrapper[4877]: E1211 18:15:19.431038 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" podUID="243bfab6-eced-4740-87ce-ab61441881f5" Dec 11 18:15:20 crc kubenswrapper[4877]: E1211 18:15:20.438166 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" podUID="243bfab6-eced-4740-87ce-ab61441881f5" Dec 11 18:15:20 crc kubenswrapper[4877]: I1211 18:15:20.883027 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:20 crc kubenswrapper[4877]: I1211 18:15:20.891883 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/53a860ae-4169-4f47-8ba7-032c96b4be3a-cert\") pod \"infra-operator-controller-manager-6797f5b887-q9vgk\" (UID: \"53a860ae-4169-4f47-8ba7-032c96b4be3a\") " pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.085486 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.093358 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e525cb88-4985-4374-a7f8-185c016e4a14-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f4zhrq\" (UID: \"e525cb88-4985-4374-a7f8-185c016e4a14\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.136944 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.146049 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.493754 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.493818 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.511710 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-metrics-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.514316 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c21c4469-97a3-47c7-bced-d7d18aa1008a-webhook-certs\") pod \"openstack-operator-controller-manager-545595b497-h5vf4\" (UID: \"c21c4469-97a3-47c7-bced-d7d18aa1008a\") " pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:21 crc kubenswrapper[4877]: I1211 18:15:21.747637 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:28 crc kubenswrapper[4877]: I1211 18:15:28.261162 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq"] Dec 11 18:15:28 crc kubenswrapper[4877]: I1211 18:15:28.390763 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4"] Dec 11 18:15:28 crc kubenswrapper[4877]: I1211 18:15:28.460344 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"] Dec 11 18:15:29 crc kubenswrapper[4877]: I1211 18:15:29.507207 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" event={"ID":"efcf4499-fc58-4b4c-b047-c397b6154e38","Type":"ContainerStarted","Data":"f6cdd17b2642da81bfecdca7439985520f6727eb2b72e984a6db28184fe51dbd"} Dec 11 18:15:29 crc kubenswrapper[4877]: I1211 18:15:29.509037 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" event={"ID":"ca3c6f4e-3491-4109-bf60-f4efbad58bc1","Type":"ContainerStarted","Data":"e8a18d3d682693506a64ce4c96920f62876603b2105b95e5c56009f3c2757ce9"} Dec 11 18:15:29 crc kubenswrapper[4877]: I1211 18:15:29.510414 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" event={"ID":"099a6c32-cea0-4cea-b763-f60ba3e867e7","Type":"ContainerStarted","Data":"bcbfd535d09d3e05aea1d3ceb47c1bd5faa765dcf66d7219e2fa9ee96b9100c5"} Dec 11 18:15:29 crc kubenswrapper[4877]: I1211 18:15:29.512057 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" 
event={"ID":"a571f5fa-fa44-48fd-b675-f0b42607ac7d","Type":"ContainerStarted","Data":"77807e8173aac561dfcdb4d4eb9b9072c0d11f4d9aacd071cac1dd31266ef253"} Dec 11 18:15:29 crc kubenswrapper[4877]: W1211 18:15:29.733792 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode525cb88_4985_4374_a7f8_185c016e4a14.slice/crio-3b4d669b093e9501c3387aec380091e6c8d2a5e27b7566afd441977762e3bfe8 WatchSource:0}: Error finding container 3b4d669b093e9501c3387aec380091e6c8d2a5e27b7566afd441977762e3bfe8: Status 404 returned error can't find the container with id 3b4d669b093e9501c3387aec380091e6c8d2a5e27b7566afd441977762e3bfe8 Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.835102 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.835364 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7bgct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-nprs8_openstack-operators(e0378195-6809-4f5c-b9f3-a37177789ee5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.836468 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" podUID="e0378195-6809-4f5c-b9f3-a37177789ee5" Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.901659 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.901818 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fg7qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-7pc7q_openstack-operators(1ebbc540-13e3-4fee-a9b7-10bb95da50b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:29 crc kubenswrapper[4877]: E1211 18:15:29.903484 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" 
podUID="1ebbc540-13e3-4fee-a9b7-10bb95da50b9" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.010602 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.010828 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wp9m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-c4nfg_openstack-operators(fd25a8fc-0f52-4795-8a65-debdfdf452b3): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.012066 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" podUID="fd25a8fc-0f52-4795-8a65-debdfdf452b3" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.017654 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.017891 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmzk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-ddr9m_openstack-operators(c3dd6849-836b-462c-abbc-d97418287658): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.021594 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" podUID="c3dd6849-836b-462c-abbc-d97418287658" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.109997 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.110206 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mp66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-tlsrq_openstack-operators(6dcd317e-41f9-45e8-bd14-77d9f4ae25dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:15:30 crc kubenswrapper[4877]: E1211 18:15:30.111419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" podUID="6dcd317e-41f9-45e8-bd14-77d9f4ae25dd" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.535970 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" event={"ID":"2fb4fbf5-e490-43ae-b7c5-8a2e481f7209","Type":"ContainerStarted","Data":"573ee764814798691a0c8ab364ab1ac79913615d97caf96b433d8da8a42b307c"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.539333 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"5542695acdbcec3934a2ad9790e8788eeab7107c496dd374e78c919d106f86b2"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.544085 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" event={"ID":"f7fa49af-7b01-4972-aec4-5b2b42dee85f","Type":"ContainerStarted","Data":"cc00de42283a9a996358cf00446b4afc94da145ba66aacb847ab801e1934ef4e"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.564921 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" event={"ID":"444a3f0d-8828-4958-9d25-61f4251d74c4","Type":"ContainerStarted","Data":"6f440a172dc2daafa98ed0219642b2ee44ba0987eb349d86116dc05dd220ffe7"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.566128 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.574553 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" Dec 11 
18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.575542 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-mcjpd" podStartSLOduration=2.8982483610000003 podStartE2EDuration="42.575512172s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.255515212 +0000 UTC m=+851.281759256" lastFinishedPulling="2025-12-11 18:15:29.932779023 +0000 UTC m=+890.959023067" observedRunningTime="2025-12-11 18:15:30.555592327 +0000 UTC m=+891.581836391" watchObservedRunningTime="2025-12-11 18:15:30.575512172 +0000 UTC m=+891.601756226" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.599276 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-q25n5" podStartSLOduration=3.077155816 podStartE2EDuration="42.599253341s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.400778468 +0000 UTC m=+851.427022512" lastFinishedPulling="2025-12-11 18:15:29.922875993 +0000 UTC m=+890.949120037" observedRunningTime="2025-12-11 18:15:30.591921171 +0000 UTC m=+891.618165215" watchObservedRunningTime="2025-12-11 18:15:30.599253341 +0000 UTC m=+891.625497385" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.607422 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" event={"ID":"105e535b-6aee-4187-9008-65a41e6e3572","Type":"ContainerStarted","Data":"31c12b1d75393e71a6eabcec3bb180fa6083994de584678fbb7275e452b8733a"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.607494 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" 
event={"ID":"105e535b-6aee-4187-9008-65a41e6e3572","Type":"ContainerStarted","Data":"9d4c33518fe8973130fd485957c1bcaff22a349fadaf9ac3ad7941cdcd60faf5"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.608252 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.647236 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" event={"ID":"15a74efd-e36a-4946-a9e5-2453c98355aa","Type":"ContainerStarted","Data":"e487f8fdd18328ac509e663831cb5d2487651bf41f4ea9a30338484add67ca99"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.680967 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" event={"ID":"c21c4469-97a3-47c7-bced-d7d18aa1008a","Type":"ContainerStarted","Data":"3db9fbec000b27fe21835aafc555ff6d8c1fbfa226cd9bb49d10f542ea74e589"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.681026 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" event={"ID":"c21c4469-97a3-47c7-bced-d7d18aa1008a","Type":"ContainerStarted","Data":"c2b86e5ce2efa81d380311c645d2d3b0de8ab650a4f557cfba71ff12c60ec60c"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.681866 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.692648 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" event={"ID":"e525cb88-4985-4374-a7f8-185c016e4a14","Type":"ContainerStarted","Data":"3b4d669b093e9501c3387aec380091e6c8d2a5e27b7566afd441977762e3bfe8"} Dec 11 18:15:30 
crc kubenswrapper[4877]: I1211 18:15:30.702130 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-h429c" podStartSLOduration=2.547762048 podStartE2EDuration="42.702105826s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:49.993303615 +0000 UTC m=+851.019547659" lastFinishedPulling="2025-12-11 18:15:30.147647393 +0000 UTC m=+891.173891437" observedRunningTime="2025-12-11 18:15:30.640187491 +0000 UTC m=+891.666431535" watchObservedRunningTime="2025-12-11 18:15:30.702105826 +0000 UTC m=+891.728349870" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.704630 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" event={"ID":"105c0e29-3d26-49ee-83f1-9ac47ec17cfd","Type":"ContainerStarted","Data":"80b6b03a093dd740b88ace3ed0dd58906773ebeca9b206985bd211d04b152118"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.730633 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" event={"ID":"35acba78-7a75-40d7-b5fb-43d595c3bc1f","Type":"ContainerStarted","Data":"4824a3ae243b71543dd40ba90f8ab18f8948a90d559671e47da875039c1b8517"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.731890 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.736720 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.739145 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" 
podStartSLOduration=14.545902778 podStartE2EDuration="41.739120489s" podCreationTimestamp="2025-12-11 18:14:49 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.787769983 +0000 UTC m=+851.814014027" lastFinishedPulling="2025-12-11 18:15:17.980987704 +0000 UTC m=+879.007231738" observedRunningTime="2025-12-11 18:15:30.687800684 +0000 UTC m=+891.714044728" watchObservedRunningTime="2025-12-11 18:15:30.739120489 +0000 UTC m=+891.765364543" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.739627 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" podStartSLOduration=41.739619782 podStartE2EDuration="41.739619782s" podCreationTimestamp="2025-12-11 18:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:15:30.724273372 +0000 UTC m=+891.750517426" watchObservedRunningTime="2025-12-11 18:15:30.739619782 +0000 UTC m=+891.765863836" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.756234 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" event={"ID":"3670e0c0-f188-4f22-8097-52f0a00b3a47","Type":"ContainerStarted","Data":"559e35f2d46daf91c925aff81a630401267f64d4c8fd3c6a6123e1956c1ef7ec"} Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.756286 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.756995 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.757020 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.757622 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.757659 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.770133 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.770221 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.770346 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.770438 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.792225 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7cg2z" podStartSLOduration=14.40835383 podStartE2EDuration="41.792195381s" podCreationTimestamp="2025-12-11 18:14:49 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.760037131 +0000 UTC m=+851.786281175" lastFinishedPulling="2025-12-11 18:15:18.143878642 +0000 UTC m=+879.170122726" observedRunningTime="2025-12-11 18:15:30.762949571 +0000 UTC m=+891.789193615" watchObservedRunningTime="2025-12-11 18:15:30.792195381 +0000 UTC 
m=+891.818439425" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.806690 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.810850 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-nf9dm" podStartSLOduration=2.919200226 podStartE2EDuration="42.810826801s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.052107668 +0000 UTC m=+851.078351712" lastFinishedPulling="2025-12-11 18:15:29.943734243 +0000 UTC m=+890.969978287" observedRunningTime="2025-12-11 18:15:30.806127232 +0000 UTC m=+891.832371276" watchObservedRunningTime="2025-12-11 18:15:30.810826801 +0000 UTC m=+891.837070845" Dec 11 18:15:30 crc kubenswrapper[4877]: I1211 18:15:30.909661 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-zm6ts" podStartSLOduration=2.757973412 podStartE2EDuration="41.909628525s" podCreationTimestamp="2025-12-11 18:14:49 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.79644981 +0000 UTC m=+851.822693854" lastFinishedPulling="2025-12-11 18:15:29.948104933 +0000 UTC m=+890.974348967" observedRunningTime="2025-12-11 18:15:30.886983295 +0000 UTC m=+891.913227339" watchObservedRunningTime="2025-12-11 18:15:30.909628525 +0000 UTC m=+891.935872599" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.037922 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-hsvv2" podStartSLOduration=3.0830978079999998 podStartE2EDuration="43.037902895s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:49.992936946 +0000 UTC m=+851.019180990" lastFinishedPulling="2025-12-11 
18:15:29.947742033 +0000 UTC m=+890.973986077" observedRunningTime="2025-12-11 18:15:31.036317841 +0000 UTC m=+892.062561895" watchObservedRunningTime="2025-12-11 18:15:31.037902895 +0000 UTC m=+892.064146939" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.783162 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" event={"ID":"1ebbc540-13e3-4fee-a9b7-10bb95da50b9","Type":"ContainerStarted","Data":"20b94e35828b809dc84e4289c50354a29049c8bd591b060ee11e2afcf0ecdbc6"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.795232 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" event={"ID":"8675dcf8-097e-4927-aa50-827f3034af41","Type":"ContainerStarted","Data":"37da0c90987f0f93b8621a05fb194ad6268600e6b714c234b9c31f3a42c2a347"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.796246 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.800977 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.809327 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-7pc7q" podStartSLOduration=29.309545992 podStartE2EDuration="43.809309194s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.416473473 +0000 UTC m=+851.442717517" lastFinishedPulling="2025-12-11 18:15:04.916236675 +0000 UTC m=+865.942480719" observedRunningTime="2025-12-11 18:15:31.80440213 +0000 UTC m=+892.830646184" watchObservedRunningTime="2025-12-11 18:15:31.809309194 +0000 UTC m=+892.835553228" Dec 11 
18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.810260 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" event={"ID":"ca3c6f4e-3491-4109-bf60-f4efbad58bc1","Type":"ContainerStarted","Data":"5e88b1c8d0acde4b1c8bfc57bc286147fa4882b3fb1219fa92b23e9f00c425b6"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.816813 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.828795 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" event={"ID":"099a6c32-cea0-4cea-b763-f60ba3e867e7","Type":"ContainerStarted","Data":"7eefc2fbd3fdb9e838597f0ca0c77ef404c74bc437020ce6862dbe1f712c9d94"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.829780 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.838009 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wldgs" podStartSLOduration=4.196055029 podStartE2EDuration="43.837986419s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.596771805 +0000 UTC m=+851.623015849" lastFinishedPulling="2025-12-11 18:15:30.238703195 +0000 UTC m=+891.264947239" observedRunningTime="2025-12-11 18:15:31.827703387 +0000 UTC m=+892.853947431" watchObservedRunningTime="2025-12-11 18:15:31.837986419 +0000 UTC m=+892.864230473" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.847305 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" 
event={"ID":"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd","Type":"ContainerStarted","Data":"3927975e488ed1c4d6f2060b1a3db9efeabdb599a8fd8852eb1119d0f5024b28"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.849252 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" podStartSLOduration=4.08386969 podStartE2EDuration="43.849224926s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.630709835 +0000 UTC m=+851.656953879" lastFinishedPulling="2025-12-11 18:15:30.396065071 +0000 UTC m=+891.422309115" observedRunningTime="2025-12-11 18:15:31.841272049 +0000 UTC m=+892.867516093" watchObservedRunningTime="2025-12-11 18:15:31.849224926 +0000 UTC m=+892.875468990" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.851116 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" event={"ID":"fd25a8fc-0f52-4795-8a65-debdfdf452b3","Type":"ContainerStarted","Data":"45581a41931197d3152c2a993d193e35cb232e8df9766ee09f34978e804a196c"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.859996 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" event={"ID":"efcf4499-fc58-4b4c-b047-c397b6154e38","Type":"ContainerStarted","Data":"e649b55f0cb1f9f6ffb141adb79951623f2b2227eccbe17eaa69b414bfee7c79"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.860572 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.861847 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" 
event={"ID":"e0378195-6809-4f5c-b9f3-a37177789ee5","Type":"ContainerStarted","Data":"b31e5627471edcc1d50538841eeb7672c2b9102cf02098c4ee25f0e68a92db76"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.863419 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" event={"ID":"a571f5fa-fa44-48fd-b675-f0b42607ac7d","Type":"ContainerStarted","Data":"ab668a5aa98188edca850746c9a272e0db24b2743e376407c6fabaac6d714018"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.863824 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.864941 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" event={"ID":"18dda364-66d0-47d3-8c03-4b0ecb73a634","Type":"ContainerStarted","Data":"5fd4c294ca21139bc212e8349fb0ecd89256adf98c31b476db286ac8e5365833"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.866038 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.869232 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" event={"ID":"c3dd6849-836b-462c-abbc-d97418287658","Type":"ContainerStarted","Data":"006a746f5aa0048e6bec76b9060775ac74425934c2d07d5ddcd11e9390cf162e"} Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.887682 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.892650 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" podStartSLOduration=4.122904763 podStartE2EDuration="43.892627574s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.640907809 +0000 UTC m=+851.667151853" lastFinishedPulling="2025-12-11 18:15:30.41063062 +0000 UTC m=+891.436874664" observedRunningTime="2025-12-11 18:15:31.88807969 +0000 UTC m=+892.914323744" watchObservedRunningTime="2025-12-11 18:15:31.892627574 +0000 UTC m=+892.918871618" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.921999 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" podStartSLOduration=4.358888868 podStartE2EDuration="43.921972367s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.798764896 +0000 UTC m=+851.825008940" lastFinishedPulling="2025-12-11 18:15:30.361848395 +0000 UTC m=+891.388092439" observedRunningTime="2025-12-11 18:15:31.91257614 +0000 UTC m=+892.938820184" watchObservedRunningTime="2025-12-11 18:15:31.921972367 +0000 UTC m=+892.948216411" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.948456 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-ddr9m" podStartSLOduration=29.465849712 podStartE2EDuration="43.948431941s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.423978732 +0000 UTC m=+851.450222776" lastFinishedPulling="2025-12-11 18:15:04.906560961 +0000 UTC m=+865.932805005" observedRunningTime="2025-12-11 18:15:31.948067181 +0000 UTC m=+892.974311235" watchObservedRunningTime="2025-12-11 18:15:31.948431941 +0000 UTC m=+892.974675995" Dec 11 18:15:31 crc kubenswrapper[4877]: I1211 18:15:31.986685 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-c4nfg" podStartSLOduration=29.158985516 podStartE2EDuration="43.986659167s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.112298044 +0000 UTC m=+851.138542078" lastFinishedPulling="2025-12-11 18:15:04.939971685 +0000 UTC m=+865.966215729" observedRunningTime="2025-12-11 18:15:31.977216989 +0000 UTC m=+893.003461043" watchObservedRunningTime="2025-12-11 18:15:31.986659167 +0000 UTC m=+893.012903211" Dec 11 18:15:32 crc kubenswrapper[4877]: I1211 18:15:32.063264 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-xtk9z" podStartSLOduration=3.810033149 podStartE2EDuration="44.063236023s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.158845265 +0000 UTC m=+851.185089309" lastFinishedPulling="2025-12-11 18:15:30.412048149 +0000 UTC m=+891.438292183" observedRunningTime="2025-12-11 18:15:32.059288615 +0000 UTC m=+893.085532679" watchObservedRunningTime="2025-12-11 18:15:32.063236023 +0000 UTC m=+893.089480067" Dec 11 18:15:32 crc kubenswrapper[4877]: I1211 18:15:32.065927 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-nprs8" podStartSLOduration=29.561072951 podStartE2EDuration="44.065920526s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.408896562 +0000 UTC m=+851.435140606" lastFinishedPulling="2025-12-11 18:15:04.913744137 +0000 UTC m=+865.939988181" observedRunningTime="2025-12-11 18:15:32.021961173 +0000 UTC m=+893.048205237" watchObservedRunningTime="2025-12-11 18:15:32.065920526 +0000 UTC m=+893.092164580" Dec 11 18:15:32 crc kubenswrapper[4877]: I1211 18:15:32.091784 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" podStartSLOduration=4.437549937 podStartE2EDuration="44.091759123s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.642475746 +0000 UTC m=+851.668719790" lastFinishedPulling="2025-12-11 18:15:30.296684932 +0000 UTC m=+891.322928976" observedRunningTime="2025-12-11 18:15:32.090835158 +0000 UTC m=+893.117079202" watchObservedRunningTime="2025-12-11 18:15:32.091759123 +0000 UTC m=+893.118003167" Dec 11 18:15:32 crc kubenswrapper[4877]: I1211 18:15:32.880456 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" event={"ID":"6dcd317e-41f9-45e8-bd14-77d9f4ae25dd","Type":"ContainerStarted","Data":"85e193059da9d31ac1524b377e57b6a880751130acf093469b215946cdc409b9"} Dec 11 18:15:32 crc kubenswrapper[4877]: I1211 18:15:32.907747 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" podStartSLOduration=3.83714803 podStartE2EDuration="44.907727631s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.416169346 +0000 UTC m=+851.442413390" lastFinishedPulling="2025-12-11 18:15:31.486748947 +0000 UTC m=+892.512992991" observedRunningTime="2025-12-11 18:15:32.902182649 +0000 UTC m=+893.928426713" watchObservedRunningTime="2025-12-11 18:15:32.907727631 +0000 UTC m=+893.933971675" Dec 11 18:15:33 crc kubenswrapper[4877]: I1211 18:15:33.891462 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:15:33 crc kubenswrapper[4877]: I1211 18:15:33.893273 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-d77ht" Dec 11 18:15:33 crc kubenswrapper[4877]: 
I1211 18:15:33.894162 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-8vf65" Dec 11 18:15:33 crc kubenswrapper[4877]: I1211 18:15:33.894209 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-p97xh" Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.901226 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" event={"ID":"e525cb88-4985-4374-a7f8-185c016e4a14","Type":"ContainerStarted","Data":"1203d12f13c2a36b178b8fd17f0fe78f42f2657b269be961c87c8ed705d2911b"} Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.901586 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.901603 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" event={"ID":"e525cb88-4985-4374-a7f8-185c016e4a14","Type":"ContainerStarted","Data":"40ae27b80bf34b2f1362d26f93f516f1a9669ccf996e6aecf0e6e34dc42aaa4c"} Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.904362 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" event={"ID":"243bfab6-eced-4740-87ce-ab61441881f5","Type":"ContainerStarted","Data":"8ba2598387014132126e4662f7ee29444fe122870b849113663984bfbdb1682a"} Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.907168 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" 
event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"c8fa98882565b3a7f59ccbd9ff894146daa7e40663596cb02082b1fd06663f51"} Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.907857 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.907946 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9"} Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.935207 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" podStartSLOduration=42.425567728 podStartE2EDuration="46.935180431s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:15:29.838313288 +0000 UTC m=+890.864557322" lastFinishedPulling="2025-12-11 18:15:34.347925981 +0000 UTC m=+895.374170025" observedRunningTime="2025-12-11 18:15:34.929478375 +0000 UTC m=+895.955722419" watchObservedRunningTime="2025-12-11 18:15:34.935180431 +0000 UTC m=+895.961424475" Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.960435 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-qn5w6" podStartSLOduration=32.647502693 podStartE2EDuration="46.960409582s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:14:50.624926007 +0000 UTC m=+851.651170051" lastFinishedPulling="2025-12-11 18:15:04.937832896 +0000 UTC m=+865.964076940" observedRunningTime="2025-12-11 18:15:34.955470256 +0000 UTC m=+895.981714300" watchObservedRunningTime="2025-12-11 
18:15:34.960409582 +0000 UTC m=+895.986653636" Dec 11 18:15:34 crc kubenswrapper[4877]: I1211 18:15:34.991533 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podStartSLOduration=42.481997664 podStartE2EDuration="46.991507963s" podCreationTimestamp="2025-12-11 18:14:48 +0000 UTC" firstStartedPulling="2025-12-11 18:15:29.83835055 +0000 UTC m=+890.864594604" lastFinishedPulling="2025-12-11 18:15:34.347860849 +0000 UTC m=+895.374104903" observedRunningTime="2025-12-11 18:15:34.973732496 +0000 UTC m=+895.999976570" watchObservedRunningTime="2025-12-11 18:15:34.991507963 +0000 UTC m=+896.017752007" Dec 11 18:15:39 crc kubenswrapper[4877]: I1211 18:15:39.111455 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tlsrq" Dec 11 18:15:39 crc kubenswrapper[4877]: I1211 18:15:39.510502 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-dmlvl" Dec 11 18:15:39 crc kubenswrapper[4877]: I1211 18:15:39.919028 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-9sw8b" Dec 11 18:15:41 crc kubenswrapper[4877]: I1211 18:15:41.144628 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:15:41 crc kubenswrapper[4877]: I1211 18:15:41.155631 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f4zhrq" Dec 11 18:15:41 crc kubenswrapper[4877]: I1211 18:15:41.760967 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-545595b497-h5vf4" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.319619 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:15:57 crc kubenswrapper[4877]: E1211 18:15:57.320783 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de7ea9a-363d-43ef-82ee-39d39cd1261e" containerName="collect-profiles" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.320802 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de7ea9a-363d-43ef-82ee-39d39cd1261e" containerName="collect-profiles" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.321037 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de7ea9a-363d-43ef-82ee-39d39cd1261e" containerName="collect-profiles" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.322141 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.326514 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lnhh5" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.326544 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.326575 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.327718 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.349251 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.380738 4877 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.388956 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.393737 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.402784 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.436337 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.436644 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nhq\" (UniqueName: \"kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.538598 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.538661 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.538700 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.538739 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gs5s\" (UniqueName: \"kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.538769 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6nhq\" (UniqueName: \"kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.540169 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.562613 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6nhq\" (UniqueName: 
\"kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq\") pod \"dnsmasq-dns-675f4bcbfc-8nn8w\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.640509 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.640581 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.640644 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gs5s\" (UniqueName: \"kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.642454 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.643182 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" 
(UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.650766 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.673835 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gs5s\" (UniqueName: \"kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s\") pod \"dnsmasq-dns-78dd6ddcc-5s4n9\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.717260 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.974423 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:15:57 crc kubenswrapper[4877]: W1211 18:15:57.979063 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11bcf070_144c_4f9e_bdec_e7e72a34be60.slice/crio-f33686c97a4ade1b451366d5df359de9440bf720f0067beb41360ce045a2e43d WatchSource:0}: Error finding container f33686c97a4ade1b451366d5df359de9440bf720f0067beb41360ce045a2e43d: Status 404 returned error can't find the container with id f33686c97a4ade1b451366d5df359de9440bf720f0067beb41360ce045a2e43d Dec 11 18:15:57 crc kubenswrapper[4877]: I1211 18:15:57.981263 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:15:58 crc kubenswrapper[4877]: I1211 18:15:58.132205 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" 
event={"ID":"11bcf070-144c-4f9e-bdec-e7e72a34be60","Type":"ContainerStarted","Data":"f33686c97a4ade1b451366d5df359de9440bf720f0067beb41360ce045a2e43d"} Dec 11 18:15:58 crc kubenswrapper[4877]: I1211 18:15:58.134504 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:15:58 crc kubenswrapper[4877]: W1211 18:15:58.144210 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b37cd21_2c17_4f1b_af04_2f0f4dcf9c9a.slice/crio-6ace50ab1ee0e625c8773f88400e367dc3961ceb306ed8359f9f912706d1f681 WatchSource:0}: Error finding container 6ace50ab1ee0e625c8773f88400e367dc3961ceb306ed8359f9f912706d1f681: Status 404 returned error can't find the container with id 6ace50ab1ee0e625c8773f88400e367dc3961ceb306ed8359f9f912706d1f681 Dec 11 18:15:59 crc kubenswrapper[4877]: I1211 18:15:59.158769 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" event={"ID":"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a","Type":"ContainerStarted","Data":"6ace50ab1ee0e625c8773f88400e367dc3961ceb306ed8359f9f912706d1f681"} Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.591321 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.630108 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.632334 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.649002 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.789768 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.789844 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.789890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvgf\" (UniqueName: \"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.891760 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvgf\" (UniqueName: \"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.891871 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.891894 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.893171 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.893877 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.919695 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvgf\" (UniqueName: \"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf\") pod \"dnsmasq-dns-666b6646f7-cp8fq\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:00 crc kubenswrapper[4877]: I1211 18:16:00.954191 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.054451 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.067719 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.069983 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.115663 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.204283 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzrl\" (UniqueName: \"kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.204342 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.204363 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 
18:16:01.307014 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbzrl\" (UniqueName: \"kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.307098 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.307126 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.308482 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.317681 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.338340 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbzrl\" 
(UniqueName: \"kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl\") pod \"dnsmasq-dns-57d769cc4f-j9rql\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.343479 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.409188 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.860036 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.862449 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.866615 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.866793 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.867352 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4p7d9" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.867669 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.867864 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.868012 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.868022 4877 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.879064 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.916751 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.916805 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.916827 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.916860 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.916907 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917324 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb62v\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917368 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917428 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917461 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917504 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:01 crc kubenswrapper[4877]: I1211 18:16:01.917542 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018555 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb62v\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018606 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018652 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018674 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018695 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018714 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018741 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018758 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018779 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018806 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.018827 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.023885 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.024174 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.024518 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.024662 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.025071 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.025204 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.025986 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.036898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.039865 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.043747 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vb62v\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.045285 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.074445 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.234935 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.260569 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.262027 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.266715 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.266724 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.266920 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.266927 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.267261 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k6bpv" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.267919 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.270939 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.282368 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435530 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435656 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435681 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435711 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435731 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.435756 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.436534 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.436651 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.436712 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqf9z\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.436739 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.436779 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538317 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538718 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538771 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538803 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538832 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538855 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538882 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538916 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqf9z\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538937 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.538962 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.539012 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 
18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.540297 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.541255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.542241 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.542302 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.542519 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.543015 4877 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.547151 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.548972 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.557507 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.558082 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.560916 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqf9z\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.565931 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:02 crc kubenswrapper[4877]: I1211 18:16:02.582987 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.337143 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.339837 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.359129 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.360639 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.364074 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.365638 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5crq9" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.367406 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.372053 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 18:16:03 crc 
kubenswrapper[4877]: I1211 18:16:03.468576 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-kolla-config\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.468912 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.468996 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.469102 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfqd\" (UniqueName: \"kubernetes.io/projected/e433a730-179e-4edf-93a9-9468b1714468-kube-api-access-lrfqd\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.469272 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.469680 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e433a730-179e-4edf-93a9-9468b1714468-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.469921 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-config-data-default\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.470049 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572346 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572849 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-kolla-config\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572878 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572908 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfqd\" (UniqueName: \"kubernetes.io/projected/e433a730-179e-4edf-93a9-9468b1714468-kube-api-access-lrfqd\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572928 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572950 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.572983 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e433a730-179e-4edf-93a9-9468b1714468-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.573596 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.573739 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-kolla-config\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.573969 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.574564 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-config-data-default\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.575091 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e433a730-179e-4edf-93a9-9468b1714468-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.575784 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e433a730-179e-4edf-93a9-9468b1714468-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc 
kubenswrapper[4877]: I1211 18:16:03.581312 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.594597 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrfqd\" (UniqueName: \"kubernetes.io/projected/e433a730-179e-4edf-93a9-9468b1714468-kube-api-access-lrfqd\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.603543 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e433a730-179e-4edf-93a9-9468b1714468-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.612192 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"e433a730-179e-4edf-93a9-9468b1714468\") " pod="openstack/openstack-galera-0" Dec 11 18:16:03 crc kubenswrapper[4877]: I1211 18:16:03.705292 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.655340 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.658271 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.659597 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.661458 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-sk7pc" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.662459 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.662610 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.662958 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.797882 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.797956 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvxt\" (UniqueName: \"kubernetes.io/projected/ce9369cc-7934-4f85-9d10-e89f50e28710-kube-api-access-szvxt\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798011 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798036 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798082 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798152 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798177 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.798226 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.880624 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.881977 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.894004 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6bjqn" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.894272 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.894352 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899327 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899432 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899455 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899490 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899526 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899545 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvxt\" (UniqueName: \"kubernetes.io/projected/ce9369cc-7934-4f85-9d10-e89f50e28710-kube-api-access-szvxt\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899582 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899604 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.899953 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.900541 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.900777 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce9369cc-7934-4f85-9d10-e89f50e28710-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.901623 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.903651 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce9369cc-7934-4f85-9d10-e89f50e28710-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.910933 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.917297 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9369cc-7934-4f85-9d10-e89f50e28710-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.935456 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.939670 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvxt\" (UniqueName: \"kubernetes.io/projected/ce9369cc-7934-4f85-9d10-e89f50e28710-kube-api-access-szvxt\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:04 crc kubenswrapper[4877]: I1211 18:16:04.951336 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ce9369cc-7934-4f85-9d10-e89f50e28710\") " pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.001974 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qmbm\" (UniqueName: 
\"kubernetes.io/projected/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kube-api-access-4qmbm\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.002027 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.002137 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.002193 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kolla-config\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.002219 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-config-data\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.015236 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.108061 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.108690 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kolla-config\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.108725 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-config-data\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.108791 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qmbm\" (UniqueName: \"kubernetes.io/projected/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kube-api-access-4qmbm\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.108815 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.109579 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kolla-config\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.117745 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-config-data\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.123970 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.130068 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qmbm\" (UniqueName: \"kubernetes.io/projected/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-kube-api-access-4qmbm\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.131337 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c80e6-7afc-495d-85c0-9c3b64d26df5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ee9c80e6-7afc-495d-85c0-9c3b64d26df5\") " pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.203932 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 11 18:16:05 crc kubenswrapper[4877]: I1211 18:16:05.293391 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" event={"ID":"e04dda18-fffa-489e-8c1f-b27530036418","Type":"ContainerStarted","Data":"d35b9751f218d032ccf6948eff42fb4a6b3a983aafeb3ca67aeccf9997e1a25c"} Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.116746 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.119769 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.122831 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qwtsk" Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.134832 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.247484 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mgt\" (UniqueName: \"kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt\") pod \"kube-state-metrics-0\" (UID: \"304062a6-e2be-499a-a93f-5f439a525e46\") " pod="openstack/kube-state-metrics-0" Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.349940 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7mgt\" (UniqueName: \"kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt\") pod \"kube-state-metrics-0\" (UID: \"304062a6-e2be-499a-a93f-5f439a525e46\") " pod="openstack/kube-state-metrics-0" Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.370919 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7mgt\" 
(UniqueName: \"kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt\") pod \"kube-state-metrics-0\" (UID: \"304062a6-e2be-499a-a93f-5f439a525e46\") " pod="openstack/kube-state-metrics-0" Dec 11 18:16:07 crc kubenswrapper[4877]: I1211 18:16:07.474924 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.436047 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zdz6c"] Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.443748 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.446167 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.446332 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5tclt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.447068 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c"] Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.447719 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.503857 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6g4mt"] Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.506500 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.532224 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.541530 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6g4mt"] Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610526 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxkfx\" (UniqueName: \"kubernetes.io/projected/efc5ef2c-fcea-4de5-a085-47ff35a33522-kube-api-access-qxkfx\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610595 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-log\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610628 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-etc-ovs\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610654 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-run\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 
18:16:10.610693 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-combined-ca-bundle\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610725 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610750 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-log-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610779 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-ovn-controller-tls-certs\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610822 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-scripts\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610855 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsph\" (UniqueName: \"kubernetes.io/projected/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-kube-api-access-zvsph\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-lib\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610917 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc5ef2c-fcea-4de5-a085-47ff35a33522-scripts\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.610954 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.712254 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-lib\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.712325 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/efc5ef2c-fcea-4de5-a085-47ff35a33522-scripts\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.712366 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713029 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxkfx\" (UniqueName: \"kubernetes.io/projected/efc5ef2c-fcea-4de5-a085-47ff35a33522-kube-api-access-qxkfx\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713064 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-log\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713085 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-etc-ovs\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713106 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-run\") pod \"ovn-controller-ovs-6g4mt\" (UID: 
\"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713131 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-combined-ca-bundle\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713157 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713185 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-log-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713195 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-lib\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.713212 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-ovn-controller-tls-certs\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 
18:16:10.713314 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714009 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-run-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714030 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-run\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714336 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-var-log\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714359 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-etc-ovs\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714430 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-scripts\") pod 
\"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714462 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsph\" (UniqueName: \"kubernetes.io/projected/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-kube-api-access-zvsph\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.714584 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc5ef2c-fcea-4de5-a085-47ff35a33522-var-log-ovn\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.715446 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc5ef2c-fcea-4de5-a085-47ff35a33522-scripts\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.718096 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-scripts\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.728151 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-combined-ca-bundle\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: 
I1211 18:16:10.728939 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc5ef2c-fcea-4de5-a085-47ff35a33522-ovn-controller-tls-certs\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.735879 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxkfx\" (UniqueName: \"kubernetes.io/projected/efc5ef2c-fcea-4de5-a085-47ff35a33522-kube-api-access-qxkfx\") pod \"ovn-controller-zdz6c\" (UID: \"efc5ef2c-fcea-4de5-a085-47ff35a33522\") " pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.746754 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsph\" (UniqueName: \"kubernetes.io/projected/2cfb93fc-8582-42dd-8c57-afd3fcd25b40-kube-api-access-zvsph\") pod \"ovn-controller-ovs-6g4mt\" (UID: \"2cfb93fc-8582-42dd-8c57-afd3fcd25b40\") " pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.780931 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:10 crc kubenswrapper[4877]: I1211 18:16:10.833174 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.320233 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.322808 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.327831 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h7v2f" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.329637 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.330456 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.330766 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.332461 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.340466 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423694 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423758 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423826 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423863 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423884 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423913 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-config\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423935 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.423959 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxf8x\" (UniqueName: 
\"kubernetes.io/projected/b163a060-e7a9-4e81-992b-a9c72bbac544-kube-api-access-hxf8x\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.525760 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxf8x\" (UniqueName: \"kubernetes.io/projected/b163a060-e7a9-4e81-992b-a9c72bbac544-kube-api-access-hxf8x\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.525854 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.525908 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.525997 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.526084 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" 
Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.526110 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.526163 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-config\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.526186 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.526851 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.531914 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.532262 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.546480 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-config\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.553669 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b163a060-e7a9-4e81-992b-a9c72bbac544-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.590925 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.594475 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.603079 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b163a060-e7a9-4e81-992b-a9c72bbac544-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc 
kubenswrapper[4877]: I1211 18:16:11.605257 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxf8x\" (UniqueName: \"kubernetes.io/projected/b163a060-e7a9-4e81-992b-a9c72bbac544-kube-api-access-hxf8x\") pod \"ovsdbserver-nb-0\" (UID: \"b163a060-e7a9-4e81-992b-a9c72bbac544\") " pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:11 crc kubenswrapper[4877]: I1211 18:16:11.650963 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.221791 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.223985 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.227195 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.228718 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.228735 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.241722 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.242522 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-b47q5" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388282 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388347 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388369 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388404 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqtq\" (UniqueName: \"kubernetes.io/projected/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-kube-api-access-9pqtq\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388449 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388482 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " 
pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388532 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.388641 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-config\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.490800 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-config\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.490884 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.490921 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.490946 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.490967 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqtq\" (UniqueName: \"kubernetes.io/projected/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-kube-api-access-9pqtq\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.491019 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.491059 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.491108 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.492476 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.493639 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.495527 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-config\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.498262 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.502511 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.510153 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.512229 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.524805 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqtq\" (UniqueName: \"kubernetes.io/projected/d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde-kube-api-access-9pqtq\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.562490 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde\") " pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:14 crc kubenswrapper[4877]: I1211 18:16:14.858017 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:16 crc kubenswrapper[4877]: I1211 18:16:16.111810 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:16:16 crc kubenswrapper[4877]: I1211 18:16:16.429456 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerStarted","Data":"45484498c33bf23d3103ecc873c2ca14818178492ac49413a37e4230a46c4d1f"} Dec 11 18:16:16 crc kubenswrapper[4877]: W1211 18:16:16.592041 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18fc032b_7957_4e94_929a_47c04d67b45f.slice/crio-b94abeb534efe4a49d3bf21daed5bad18704205916df51ab6bbf24fad91654c6 WatchSource:0}: Error finding container b94abeb534efe4a49d3bf21daed5bad18704205916df51ab6bbf24fad91654c6: Status 404 returned error can't find the container with id b94abeb534efe4a49d3bf21daed5bad18704205916df51ab6bbf24fad91654c6 Dec 11 18:16:16 crc kubenswrapper[4877]: I1211 18:16:16.637828 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:16:16 crc kubenswrapper[4877]: I1211 18:16:16.637921 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.641560 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.641756 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8nn8w_openstack(7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.643021 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" podUID="7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.674286 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.674672 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gs5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5s4n9_openstack(11bcf070-144c-4f9e-bdec-e7e72a34be60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:16:16 crc kubenswrapper[4877]: E1211 18:16:16.676027 4877 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" podUID="11bcf070-144c-4f9e-bdec-e7e72a34be60" Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.094892 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.300624 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: W1211 18:16:17.309835 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee9c80e6_7afc_495d_85c0_9c3b64d26df5.slice/crio-cee5bc0defa3ecb22f9be1486890095db5e3e16601ae2eb0ce599fde954a4e09 WatchSource:0}: Error finding container cee5bc0defa3ecb22f9be1486890095db5e3e16601ae2eb0ce599fde954a4e09: Status 404 returned error can't find the container with id cee5bc0defa3ecb22f9be1486890095db5e3e16601ae2eb0ce599fde954a4e09 Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.459685 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.462049 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e433a730-179e-4edf-93a9-9468b1714468","Type":"ContainerStarted","Data":"ad8daf6d3138cf16f7fad279265006092b9d73219cefe076eeb12123499d75de"} Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.466790 4877 generic.go:334] "Generic (PLEG): container finished" podID="e04dda18-fffa-489e-8c1f-b27530036418" containerID="821a264dc124eed1558ca3570b111126366823d123e633a34690ba88202b34a1" exitCode=0 Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.466931 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" 
event={"ID":"e04dda18-fffa-489e-8c1f-b27530036418","Type":"ContainerDied","Data":"821a264dc124eed1558ca3570b111126366823d123e633a34690ba88202b34a1"} Dec 11 18:16:17 crc kubenswrapper[4877]: W1211 18:16:17.468103 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefc5ef2c_fcea_4de5_a085_47ff35a33522.slice/crio-23440b32dae7b45c95d342d6f1c74f28af9271605a1e81a1baf599b07765b80e WatchSource:0}: Error finding container 23440b32dae7b45c95d342d6f1c74f28af9271605a1e81a1baf599b07765b80e: Status 404 returned error can't find the container with id 23440b32dae7b45c95d342d6f1c74f28af9271605a1e81a1baf599b07765b80e Dec 11 18:16:17 crc kubenswrapper[4877]: W1211 18:16:17.469097 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9369cc_7934_4f85_9d10_e89f50e28710.slice/crio-7e1ebe73a95134847e4d747755c0753bb03f01d0679adabcb76319d7776653c6 WatchSource:0}: Error finding container 7e1ebe73a95134847e4d747755c0753bb03f01d0679adabcb76319d7776653c6: Status 404 returned error can't find the container with id 7e1ebe73a95134847e4d747755c0753bb03f01d0679adabcb76319d7776653c6 Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.472406 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerStarted","Data":"b94abeb534efe4a49d3bf21daed5bad18704205916df51ab6bbf24fad91654c6"} Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.477705 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c"] Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.483241 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ee9c80e6-7afc-495d-85c0-9c3b64d26df5","Type":"ContainerStarted","Data":"cee5bc0defa3ecb22f9be1486890095db5e3e16601ae2eb0ce599fde954a4e09"} Dec 11 
18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.516036 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.551867 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.604662 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: W1211 18:16:17.609482 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb163a060_e7a9_4e81_992b_a9c72bbac544.slice/crio-4bdb83e573bf92bbb908f0b895c5deb435b35e82ce5279e5e2a969fc12e6a654 WatchSource:0}: Error finding container 4bdb83e573bf92bbb908f0b895c5deb435b35e82ce5279e5e2a969fc12e6a654: Status 404 returned error can't find the container with id 4bdb83e573bf92bbb908f0b895c5deb435b35e82ce5279e5e2a969fc12e6a654 Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.819209 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 18:16:17 crc kubenswrapper[4877]: E1211 18:16:17.937877 4877 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 11 18:16:17 crc kubenswrapper[4877]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e04dda18-fffa-489e-8c1f-b27530036418/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 18:16:17 crc kubenswrapper[4877]: > podSandboxID="d35b9751f218d032ccf6948eff42fb4a6b3a983aafeb3ca67aeccf9997e1a25c" Dec 11 18:16:17 crc kubenswrapper[4877]: E1211 18:16:17.938536 4877 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 11 18:16:17 crc kubenswrapper[4877]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c 
dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcvgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-cp8fq_openstack(e04dda18-fffa-489e-8c1f-b27530036418): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e04dda18-fffa-489e-8c1f-b27530036418/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 18:16:17 crc kubenswrapper[4877]: > logger="UnhandledError" Dec 11 18:16:17 crc kubenswrapper[4877]: E1211 18:16:17.940206 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e04dda18-fffa-489e-8c1f-b27530036418/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" podUID="e04dda18-fffa-489e-8c1f-b27530036418" Dec 11 18:16:17 crc kubenswrapper[4877]: I1211 18:16:17.946493 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6g4mt"] Dec 11 18:16:18 crc kubenswrapper[4877]: W1211 18:16:18.034629 4877 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cfb93fc_8582_42dd_8c57_afd3fcd25b40.slice/crio-13c622a71a19459ea8e785b2e4d2f81c78e804245e9929fc7e7a36ca7fc55af0 WatchSource:0}: Error finding container 13c622a71a19459ea8e785b2e4d2f81c78e804245e9929fc7e7a36ca7fc55af0: Status 404 returned error can't find the container with id 13c622a71a19459ea8e785b2e4d2f81c78e804245e9929fc7e7a36ca7fc55af0 Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.035883 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.071323 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.185418 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config\") pod \"11bcf070-144c-4f9e-bdec-e7e72a34be60\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.185495 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc\") pod \"11bcf070-144c-4f9e-bdec-e7e72a34be60\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.185539 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6nhq\" (UniqueName: \"kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq\") pod \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.185562 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config\") pod \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\" (UID: \"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a\") " Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.185715 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gs5s\" (UniqueName: \"kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s\") pod \"11bcf070-144c-4f9e-bdec-e7e72a34be60\" (UID: \"11bcf070-144c-4f9e-bdec-e7e72a34be60\") " Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.186277 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config" (OuterVolumeSpecName: "config") pod "7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a" (UID: "7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.186279 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11bcf070-144c-4f9e-bdec-e7e72a34be60" (UID: "11bcf070-144c-4f9e-bdec-e7e72a34be60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.186301 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config" (OuterVolumeSpecName: "config") pod "11bcf070-144c-4f9e-bdec-e7e72a34be60" (UID: "11bcf070-144c-4f9e-bdec-e7e72a34be60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.192125 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq" (OuterVolumeSpecName: "kube-api-access-j6nhq") pod "7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a" (UID: "7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a"). InnerVolumeSpecName "kube-api-access-j6nhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.192472 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s" (OuterVolumeSpecName: "kube-api-access-6gs5s") pod "11bcf070-144c-4f9e-bdec-e7e72a34be60" (UID: "11bcf070-144c-4f9e-bdec-e7e72a34be60"). InnerVolumeSpecName "kube-api-access-6gs5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.288123 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gs5s\" (UniqueName: \"kubernetes.io/projected/11bcf070-144c-4f9e-bdec-e7e72a34be60-kube-api-access-6gs5s\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.288164 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.288174 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11bcf070-144c-4f9e-bdec-e7e72a34be60-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.288184 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6nhq\" (UniqueName: 
\"kubernetes.io/projected/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-kube-api-access-j6nhq\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.288231 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.493799 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b163a060-e7a9-4e81-992b-a9c72bbac544","Type":"ContainerStarted","Data":"4bdb83e573bf92bbb908f0b895c5deb435b35e82ce5279e5e2a969fc12e6a654"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.496595 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" event={"ID":"11bcf070-144c-4f9e-bdec-e7e72a34be60","Type":"ContainerDied","Data":"f33686c97a4ade1b451366d5df359de9440bf720f0067beb41360ce045a2e43d"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.496621 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5s4n9" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.498602 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde","Type":"ContainerStarted","Data":"33073103a31cfd1c9b86bd4d00c0dd4b6d5f02099953edbd4a4d42a02eea3444"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.499827 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6g4mt" event={"ID":"2cfb93fc-8582-42dd-8c57-afd3fcd25b40","Type":"ContainerStarted","Data":"13c622a71a19459ea8e785b2e4d2f81c78e804245e9929fc7e7a36ca7fc55af0"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.504690 4877 generic.go:334] "Generic (PLEG): container finished" podID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerID="e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8" exitCode=0 Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.504795 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" event={"ID":"d4d97300-ea2c-4e95-86ae-061bc789ab6d","Type":"ContainerDied","Data":"e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.504857 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" event={"ID":"d4d97300-ea2c-4e95-86ae-061bc789ab6d","Type":"ContainerStarted","Data":"ff2ac50430f1d29c670bee61720efda2b01c1898116ddc08803811b6de630817"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.507531 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c" event={"ID":"efc5ef2c-fcea-4de5-a085-47ff35a33522","Type":"ContainerStarted","Data":"23440b32dae7b45c95d342d6f1c74f28af9271605a1e81a1baf599b07765b80e"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.509460 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"304062a6-e2be-499a-a93f-5f439a525e46","Type":"ContainerStarted","Data":"c01780141097dc473143351a6aeade81b11315268122a8fa954bb9212170e345"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.516703 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.516699 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8nn8w" event={"ID":"7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a","Type":"ContainerDied","Data":"6ace50ab1ee0e625c8773f88400e367dc3961ceb306ed8359f9f912706d1f681"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.529157 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9369cc-7934-4f85-9d10-e89f50e28710","Type":"ContainerStarted","Data":"7e1ebe73a95134847e4d747755c0753bb03f01d0679adabcb76319d7776653c6"} Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.587908 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.596076 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5s4n9"] Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.616938 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:16:18 crc kubenswrapper[4877]: I1211 18:16:18.626265 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8nn8w"] Dec 11 18:16:19 crc kubenswrapper[4877]: I1211 18:16:19.235458 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bcf070-144c-4f9e-bdec-e7e72a34be60" path="/var/lib/kubelet/pods/11bcf070-144c-4f9e-bdec-e7e72a34be60/volumes" Dec 11 18:16:19 crc kubenswrapper[4877]: I1211 18:16:19.237680 4877 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a" path="/var/lib/kubelet/pods/7b37cd21-2c17-4f1b-af04-2f0f4dcf9c9a/volumes" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.616126 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e433a730-179e-4edf-93a9-9468b1714468","Type":"ContainerStarted","Data":"502ff2ce605e92f127837b33893a6ea14b44e0c998dd793a69f40f6ec793a8d9"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.620990 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" event={"ID":"e04dda18-fffa-489e-8c1f-b27530036418","Type":"ContainerStarted","Data":"daf4fac3a360346e2390997935fe4df4f964311b735bb358f97359b6837f3776"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.621461 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.627857 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304062a6-e2be-499a-a93f-5f439a525e46","Type":"ContainerStarted","Data":"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.628002 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.630559 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6g4mt" event={"ID":"2cfb93fc-8582-42dd-8c57-afd3fcd25b40","Type":"ContainerStarted","Data":"42deb42d048fa76893c3b19bd1f182777cc7c2c9ae6f32fe9640043fd33c8f5c"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.635082 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"ce9369cc-7934-4f85-9d10-e89f50e28710","Type":"ContainerStarted","Data":"9717836fa55b3ff11cae143142de8bc0de24f3c09d305f4b4b38f2eaf3e645a8"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.642975 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" event={"ID":"d4d97300-ea2c-4e95-86ae-061bc789ab6d","Type":"ContainerStarted","Data":"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.644190 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.649291 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b163a060-e7a9-4e81-992b-a9c72bbac544","Type":"ContainerStarted","Data":"34488e2c8bfa13b9f04a915af1f780c7c72e8e2c84b540940da31d716b859255"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.654025 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c" event={"ID":"efc5ef2c-fcea-4de5-a085-47ff35a33522","Type":"ContainerStarted","Data":"35cb3fec30e9568a903a2a267226718eed352583ca18ff243ea356d36abab6ca"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.654229 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zdz6c" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.656281 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ee9c80e6-7afc-495d-85c0-9c3b64d26df5","Type":"ContainerStarted","Data":"f1b2ea84529680e26489ffd9764804ecbf003783b7476368d649ab5421030b39"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.656442 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.661984 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde","Type":"ContainerStarted","Data":"bd428cdf737e31bbe6c4a2d1dbaa869493a615846263ad3ebf164bdeab001192"} Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.693226 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" podStartSLOduration=15.235529428 podStartE2EDuration="27.693202258s" podCreationTimestamp="2025-12-11 18:16:00 +0000 UTC" firstStartedPulling="2025-12-11 18:16:04.356039425 +0000 UTC m=+925.382283469" lastFinishedPulling="2025-12-11 18:16:16.813712255 +0000 UTC m=+937.839956299" observedRunningTime="2025-12-11 18:16:27.686441617 +0000 UTC m=+948.712685671" watchObservedRunningTime="2025-12-11 18:16:27.693202258 +0000 UTC m=+948.719446312" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.737532 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.623510704 podStartE2EDuration="20.73750021s" podCreationTimestamp="2025-12-11 18:16:07 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.537140202 +0000 UTC m=+938.563384246" lastFinishedPulling="2025-12-11 18:16:26.651129708 +0000 UTC m=+947.677373752" observedRunningTime="2025-12-11 18:16:27.728096579 +0000 UTC m=+948.754340643" watchObservedRunningTime="2025-12-11 18:16:27.73750021 +0000 UTC m=+948.763744264" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.756136 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zdz6c" podStartSLOduration=9.271986519 podStartE2EDuration="17.756108687s" podCreationTimestamp="2025-12-11 18:16:10 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.483550921 +0000 UTC m=+938.509794965" lastFinishedPulling="2025-12-11 18:16:25.967673089 +0000 UTC m=+946.993917133" observedRunningTime="2025-12-11 18:16:27.746196282 +0000 UTC m=+948.772440336" watchObservedRunningTime="2025-12-11 
18:16:27.756108687 +0000 UTC m=+948.782352731" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.766270 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" podStartSLOduration=26.766241897 podStartE2EDuration="26.766241897s" podCreationTimestamp="2025-12-11 18:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:16:27.764929542 +0000 UTC m=+948.791173586" watchObservedRunningTime="2025-12-11 18:16:27.766241897 +0000 UTC m=+948.792485941" Dec 11 18:16:27 crc kubenswrapper[4877]: I1211 18:16:27.795601 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.563335534 podStartE2EDuration="23.7955787s" podCreationTimestamp="2025-12-11 18:16:04 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.314290244 +0000 UTC m=+938.340534288" lastFinishedPulling="2025-12-11 18:16:25.54653337 +0000 UTC m=+946.572777454" observedRunningTime="2025-12-11 18:16:27.788891061 +0000 UTC m=+948.815135125" watchObservedRunningTime="2025-12-11 18:16:27.7955787 +0000 UTC m=+948.821822764" Dec 11 18:16:28 crc kubenswrapper[4877]: I1211 18:16:28.673138 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerStarted","Data":"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b"} Dec 11 18:16:28 crc kubenswrapper[4877]: I1211 18:16:28.675513 4877 generic.go:334] "Generic (PLEG): container finished" podID="2cfb93fc-8582-42dd-8c57-afd3fcd25b40" containerID="42deb42d048fa76893c3b19bd1f182777cc7c2c9ae6f32fe9640043fd33c8f5c" exitCode=0 Dec 11 18:16:28 crc kubenswrapper[4877]: I1211 18:16:28.675597 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6g4mt" 
event={"ID":"2cfb93fc-8582-42dd-8c57-afd3fcd25b40","Type":"ContainerDied","Data":"42deb42d048fa76893c3b19bd1f182777cc7c2c9ae6f32fe9640043fd33c8f5c"} Dec 11 18:16:28 crc kubenswrapper[4877]: I1211 18:16:28.680243 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerStarted","Data":"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9"} Dec 11 18:16:29 crc kubenswrapper[4877]: I1211 18:16:29.688046 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6g4mt" event={"ID":"2cfb93fc-8582-42dd-8c57-afd3fcd25b40","Type":"ContainerStarted","Data":"5c0961ecc6fb750c89b3627bb464676c1ce87782b00723e8edc15aa2ec1d8534"} Dec 11 18:16:29 crc kubenswrapper[4877]: I1211 18:16:29.688315 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6g4mt" event={"ID":"2cfb93fc-8582-42dd-8c57-afd3fcd25b40","Type":"ContainerStarted","Data":"dbf1d7dc691e8216d8f034db32e79933d0082bad6cf8610361b0e68151ac2425"} Dec 11 18:16:29 crc kubenswrapper[4877]: I1211 18:16:29.708336 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6g4mt" podStartSLOduration=11.866749337 podStartE2EDuration="19.708315156s" podCreationTimestamp="2025-12-11 18:16:10 +0000 UTC" firstStartedPulling="2025-12-11 18:16:18.047398729 +0000 UTC m=+939.073642773" lastFinishedPulling="2025-12-11 18:16:25.888964538 +0000 UTC m=+946.915208592" observedRunningTime="2025-12-11 18:16:29.705759558 +0000 UTC m=+950.732003602" watchObservedRunningTime="2025-12-11 18:16:29.708315156 +0000 UTC m=+950.734559220" Dec 11 18:16:30 crc kubenswrapper[4877]: I1211 18:16:30.698432 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:30 crc kubenswrapper[4877]: I1211 18:16:30.698976 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:34.755867 4877 generic.go:334] "Generic (PLEG): container finished" podID="e433a730-179e-4edf-93a9-9468b1714468" containerID="502ff2ce605e92f127837b33893a6ea14b44e0c998dd793a69f40f6ec793a8d9" exitCode=0 Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:34.756002 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e433a730-179e-4edf-93a9-9468b1714468","Type":"ContainerDied","Data":"502ff2ce605e92f127837b33893a6ea14b44e0c998dd793a69f40f6ec793a8d9"} Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:35.205987 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:35.767811 4877 generic.go:334] "Generic (PLEG): container finished" podID="ce9369cc-7934-4f85-9d10-e89f50e28710" containerID="9717836fa55b3ff11cae143142de8bc0de24f3c09d305f4b4b38f2eaf3e645a8" exitCode=0 Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:35.767860 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9369cc-7934-4f85-9d10-e89f50e28710","Type":"ContainerDied","Data":"9717836fa55b3ff11cae143142de8bc0de24f3c09d305f4b4b38f2eaf3e645a8"} Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:35.956684 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:36.410597 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:36.493946 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:36 crc kubenswrapper[4877]: I1211 18:16:36.780179 4877 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="dnsmasq-dns" containerID="cri-o://daf4fac3a360346e2390997935fe4df4f964311b735bb358f97359b6837f3776" gracePeriod=10 Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.486197 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.487578 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.489042 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.506940 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.556723 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkbq\" (UniqueName: \"kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.556822 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.556851 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config\") pod 
\"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.658116 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.658182 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.658254 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkbq\" (UniqueName: \"kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.659198 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.659198 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " 
pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.680580 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkbq\" (UniqueName: \"kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq\") pod \"dnsmasq-dns-7cb5889db5-nfntl\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:37 crc kubenswrapper[4877]: I1211 18:16:37.811025 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.503906 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.511596 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.514993 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.515417 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k6vcj" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.515418 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.515792 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.530650 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.703785 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-cache\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.703892 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.703961 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-lock\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.704066 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92prj\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-kube-api-access-92prj\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.704590 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.799488 4877 generic.go:334] "Generic (PLEG): container finished" podID="e04dda18-fffa-489e-8c1f-b27530036418" containerID="daf4fac3a360346e2390997935fe4df4f964311b735bb358f97359b6837f3776" exitCode=0 Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.799539 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" event={"ID":"e04dda18-fffa-489e-8c1f-b27530036418","Type":"ContainerDied","Data":"daf4fac3a360346e2390997935fe4df4f964311b735bb358f97359b6837f3776"} Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.806635 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.806712 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-cache\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.806751 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.806791 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-lock\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.806808 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92prj\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-kube-api-access-92prj\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " 
pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: E1211 18:16:38.807069 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:38 crc kubenswrapper[4877]: E1211 18:16:38.807113 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.807069 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: E1211 18:16:38.807188 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:16:39.307167498 +0000 UTC m=+960.333411542 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.807337 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-cache\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.807723 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c6eb39a0-5f8c-44d1-b27e-c946c850a539-lock\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.838512 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92prj\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-kube-api-access-92prj\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:38 crc kubenswrapper[4877]: I1211 18:16:38.842656 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.317677 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:39 crc 
kubenswrapper[4877]: E1211 18:16:39.318025 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.321641 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.321788 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:16:40.321732851 +0000 UTC m=+961.347976895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.422163 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.423876 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n54h4h8bh68h54dh88h64bh64ch64bh8fh5bch595h5fdh86h579hc7h96hdch684h58bh656h5f7h76h668hdfh79h557h5bdhb5h689h97h667q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pqtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil
,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.424673 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.425353 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.440781 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.441795 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nc5hc7h99h677hf6h58hf4h5d4h549h67bh549h68ch5bch9ch5bch8h99h5b9h655h59fh667h9fh678h9ch556h59ch7fh664h548hb5h55ch5cbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxf8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:n
il,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(b163a060-e7a9-4e81-992b-a9c72bbac544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.442985 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="b163a060-e7a9-4e81-992b-a9c72bbac544" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.525060 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcvgf\" (UniqueName: \"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf\") pod \"e04dda18-fffa-489e-8c1f-b27530036418\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.525530 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc\") pod \"e04dda18-fffa-489e-8c1f-b27530036418\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.525639 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config\") pod \"e04dda18-fffa-489e-8c1f-b27530036418\" (UID: \"e04dda18-fffa-489e-8c1f-b27530036418\") " Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.529301 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf" (OuterVolumeSpecName: "kube-api-access-lcvgf") pod "e04dda18-fffa-489e-8c1f-b27530036418" (UID: "e04dda18-fffa-489e-8c1f-b27530036418"). InnerVolumeSpecName "kube-api-access-lcvgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.564396 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config" (OuterVolumeSpecName: "config") pod "e04dda18-fffa-489e-8c1f-b27530036418" (UID: "e04dda18-fffa-489e-8c1f-b27530036418"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.608837 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e04dda18-fffa-489e-8c1f-b27530036418" (UID: "e04dda18-fffa-489e-8c1f-b27530036418"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.628676 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.628709 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcvgf\" (UniqueName: \"kubernetes.io/projected/e04dda18-fffa-489e-8c1f-b27530036418-kube-api-access-lcvgf\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.628721 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e04dda18-fffa-489e-8c1f-b27530036418-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.770053 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:16:39 crc kubenswrapper[4877]: W1211 18:16:39.776048 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a32890_1340_4640_b192_526f6c90e72e.slice/crio-d71524be336f4d3812f54ee2baa92fd5656fb38349bf1b53412c9f0d105ee711 WatchSource:0}: Error finding container d71524be336f4d3812f54ee2baa92fd5656fb38349bf1b53412c9f0d105ee711: Status 404 returned error can't find the container with id d71524be336f4d3812f54ee2baa92fd5656fb38349bf1b53412c9f0d105ee711 Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.837600 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ce9369cc-7934-4f85-9d10-e89f50e28710","Type":"ContainerStarted","Data":"0308ca7d49f58972a6a6a1eb0e2bda2a08eb4f8a891d18e36dd54e96aea203d5"} Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.840716 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" event={"ID":"c7a32890-1340-4640-b192-526f6c90e72e","Type":"ContainerStarted","Data":"d71524be336f4d3812f54ee2baa92fd5656fb38349bf1b53412c9f0d105ee711"} Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.844936 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e433a730-179e-4edf-93a9-9468b1714468","Type":"ContainerStarted","Data":"27f04218021e92d106f5e445a68abc9675cb8a95a6c1506dc1562005b5849db5"} Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.853306 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.853699 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-cp8fq" event={"ID":"e04dda18-fffa-489e-8c1f-b27530036418","Type":"ContainerDied","Data":"d35b9751f218d032ccf6948eff42fb4a6b3a983aafeb3ca67aeccf9997e1a25c"} Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.853775 4877 scope.go:117] "RemoveContainer" containerID="daf4fac3a360346e2390997935fe4df4f964311b735bb358f97359b6837f3776" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.856037 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="b163a060-e7a9-4e81-992b-a9c72bbac544" Dec 11 18:16:39 crc kubenswrapper[4877]: E1211 18:16:39.856855 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 
18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.858615 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.879228 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.395180183 podStartE2EDuration="36.879204468s" podCreationTimestamp="2025-12-11 18:16:03 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.482905244 +0000 UTC m=+938.509149288" lastFinishedPulling="2025-12-11 18:16:25.966929529 +0000 UTC m=+946.993173573" observedRunningTime="2025-12-11 18:16:39.87250119 +0000 UTC m=+960.898745244" watchObservedRunningTime="2025-12-11 18:16:39.879204468 +0000 UTC m=+960.905448512" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.958918 4877 scope.go:117] "RemoveContainer" containerID="821a264dc124eed1558ca3570b111126366823d123e633a34690ba88202b34a1" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.963165 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.207032732 podStartE2EDuration="37.963137718s" podCreationTimestamp="2025-12-11 18:16:02 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.107488725 +0000 UTC m=+938.133732779" lastFinishedPulling="2025-12-11 18:16:25.863593711 +0000 UTC m=+946.889837765" observedRunningTime="2025-12-11 18:16:39.959535862 +0000 UTC m=+960.985779916" watchObservedRunningTime="2025-12-11 18:16:39.963137718 +0000 UTC m=+960.989381762" Dec 11 18:16:39 crc kubenswrapper[4877]: I1211 18:16:39.993027 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:40 crc kubenswrapper[4877]: I1211 18:16:40.001440 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-cp8fq"] Dec 11 18:16:40 crc kubenswrapper[4877]: I1211 18:16:40.345103 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:40 crc kubenswrapper[4877]: E1211 18:16:40.345831 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:40 crc kubenswrapper[4877]: E1211 18:16:40.345874 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:40 crc kubenswrapper[4877]: E1211 18:16:40.345964 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:16:42.345940284 +0000 UTC m=+963.372184328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:40 crc kubenswrapper[4877]: I1211 18:16:40.862616 4877 generic.go:334] "Generic (PLEG): container finished" podID="c7a32890-1340-4640-b192-526f6c90e72e" containerID="1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3" exitCode=0 Dec 11 18:16:40 crc kubenswrapper[4877]: I1211 18:16:40.863861 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" event={"ID":"c7a32890-1340-4640-b192-526f6c90e72e","Type":"ContainerDied","Data":"1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3"} Dec 11 18:16:40 crc kubenswrapper[4877]: E1211 18:16:40.865085 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.229552 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04dda18-fffa-489e-8c1f-b27530036418" path="/var/lib/kubelet/pods/e04dda18-fffa-489e-8c1f-b27530036418/volumes" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.651140 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.651223 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:41 crc kubenswrapper[4877]: E1211 18:16:41.654728 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="b163a060-e7a9-4e81-992b-a9c72bbac544" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.718929 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.859310 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.879310 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" event={"ID":"c7a32890-1340-4640-b192-526f6c90e72e","Type":"ContainerStarted","Data":"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68"} Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.879765 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:41 crc kubenswrapper[4877]: E1211 18:16:41.882168 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="b163a060-e7a9-4e81-992b-a9c72bbac544" Dec 11 18:16:41 crc kubenswrapper[4877]: E1211 18:16:41.882657 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.913495 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" podStartSLOduration=4.913470417 podStartE2EDuration="4.913470417s" podCreationTimestamp="2025-12-11 18:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:16:41.9019908 +0000 UTC m=+962.928234874" watchObservedRunningTime="2025-12-11 18:16:41.913470417 +0000 UTC m=+962.939714471" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.925986 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 18:16:41 crc kubenswrapper[4877]: I1211 18:16:41.932886 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.380866 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.381147 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.381185 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.381269 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:16:46.38124439 +0000 UTC m=+967.407488434 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.519516 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vdnmw"] Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.519994 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="init" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.520021 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="init" Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.520045 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="dnsmasq-dns" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.520054 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="dnsmasq-dns" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.520285 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04dda18-fffa-489e-8c1f-b27530036418" containerName="dnsmasq-dns" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.521059 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.524205 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.524253 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.524263 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.535258 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vdnmw"] Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.588650 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.588711 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.588786 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwf5\" (UniqueName: \"kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: 
I1211 18:16:42.588808 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.588975 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.589042 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.589078 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690406 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmwf5\" (UniqueName: \"kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 
18:16:42.690454 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690507 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690527 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690548 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690612 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.690647 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.691345 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.691413 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.691457 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.698131 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.698274 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle\") pod 
\"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.699287 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.709561 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmwf5\" (UniqueName: \"kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5\") pod \"swift-ring-rebalance-vdnmw\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.878744 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k6vcj" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.887434 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.890691 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="b163a060-e7a9-4e81-992b-a9c72bbac544" Dec 11 18:16:42 crc kubenswrapper[4877]: E1211 18:16:42.891201 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 18:16:42 crc kubenswrapper[4877]: I1211 18:16:42.968759 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 18:16:43 crc kubenswrapper[4877]: I1211 18:16:43.397673 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vdnmw"] Dec 11 18:16:43 crc kubenswrapper[4877]: I1211 18:16:43.706403 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 18:16:43 crc kubenswrapper[4877]: I1211 18:16:43.706466 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 18:16:43 crc kubenswrapper[4877]: I1211 18:16:43.899993 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vdnmw" event={"ID":"daa2f87b-1f8a-423e-88f1-17150ab15ba0","Type":"ContainerStarted","Data":"d7c9c185813595999b734107bad02482ddf85f1be90fe05b55b7b233758a13ef"} Dec 11 18:16:43 crc kubenswrapper[4877]: E1211 18:16:43.904214 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde" Dec 11 18:16:45 crc kubenswrapper[4877]: I1211 18:16:45.015518 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:45 crc kubenswrapper[4877]: I1211 18:16:45.015871 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:45 crc kubenswrapper[4877]: I1211 18:16:45.092252 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:45 crc kubenswrapper[4877]: I1211 18:16:45.963287 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 18:16:46 crc kubenswrapper[4877]: I1211 18:16:46.003304 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 18:16:46 crc kubenswrapper[4877]: I1211 18:16:46.065727 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 18:16:46 crc kubenswrapper[4877]: I1211 18:16:46.381967 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:46 crc kubenswrapper[4877]: E1211 18:16:46.382498 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:46 crc kubenswrapper[4877]: E1211 18:16:46.382549 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:46 crc kubenswrapper[4877]: E1211 18:16:46.382628 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:16:54.382589354 +0000 UTC m=+975.408833398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:46 crc kubenswrapper[4877]: I1211 18:16:46.641202 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:16:46 crc kubenswrapper[4877]: I1211 18:16:46.641318 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:16:47 crc kubenswrapper[4877]: I1211 18:16:47.813657 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:16:47 crc kubenswrapper[4877]: I1211 18:16:47.898546 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:47 crc kubenswrapper[4877]: I1211 18:16:47.898884 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" 
podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="dnsmasq-dns" containerID="cri-o://2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c" gracePeriod=10 Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.586864 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.725810 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbzrl\" (UniqueName: \"kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl\") pod \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.725978 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config\") pod \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.726069 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc\") pod \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\" (UID: \"d4d97300-ea2c-4e95-86ae-061bc789ab6d\") " Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.730734 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl" (OuterVolumeSpecName: "kube-api-access-fbzrl") pod "d4d97300-ea2c-4e95-86ae-061bc789ab6d" (UID: "d4d97300-ea2c-4e95-86ae-061bc789ab6d"). InnerVolumeSpecName "kube-api-access-fbzrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.763586 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config" (OuterVolumeSpecName: "config") pod "d4d97300-ea2c-4e95-86ae-061bc789ab6d" (UID: "d4d97300-ea2c-4e95-86ae-061bc789ab6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.773143 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4d97300-ea2c-4e95-86ae-061bc789ab6d" (UID: "d4d97300-ea2c-4e95-86ae-061bc789ab6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.828936 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbzrl\" (UniqueName: \"kubernetes.io/projected/d4d97300-ea2c-4e95-86ae-061bc789ab6d-kube-api-access-fbzrl\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.828984 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.828998 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d97300-ea2c-4e95-86ae-061bc789ab6d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.949285 4877 generic.go:334] "Generic (PLEG): container finished" podID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerID="2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c" exitCode=0 Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 
18:16:48.949361 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" event={"ID":"d4d97300-ea2c-4e95-86ae-061bc789ab6d","Type":"ContainerDied","Data":"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c"} Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.949416 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" event={"ID":"d4d97300-ea2c-4e95-86ae-061bc789ab6d","Type":"ContainerDied","Data":"ff2ac50430f1d29c670bee61720efda2b01c1898116ddc08803811b6de630817"} Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.949437 4877 scope.go:117] "RemoveContainer" containerID="2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.949468 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-j9rql" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.954239 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vdnmw" event={"ID":"daa2f87b-1f8a-423e-88f1-17150ab15ba0","Type":"ContainerStarted","Data":"f944e493e2811d1a09f9eb5ed847cb04b83d7ec316b4bfe3e04e0bc1e947c7aa"} Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.972798 4877 scope.go:117] "RemoveContainer" containerID="e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8" Dec 11 18:16:48 crc kubenswrapper[4877]: I1211 18:16:48.985362 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vdnmw" podStartSLOduration=2.018809393 podStartE2EDuration="6.985337214s" podCreationTimestamp="2025-12-11 18:16:42 +0000 UTC" firstStartedPulling="2025-12-11 18:16:43.399352071 +0000 UTC m=+964.425596115" lastFinishedPulling="2025-12-11 18:16:48.365879872 +0000 UTC m=+969.392123936" observedRunningTime="2025-12-11 18:16:48.977828753 +0000 UTC m=+970.004072837" 
watchObservedRunningTime="2025-12-11 18:16:48.985337214 +0000 UTC m=+970.011581278" Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.001931 4877 scope.go:117] "RemoveContainer" containerID="2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c" Dec 11 18:16:49 crc kubenswrapper[4877]: E1211 18:16:49.002360 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c\": container with ID starting with 2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c not found: ID does not exist" containerID="2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c" Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.002441 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c"} err="failed to get container status \"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c\": rpc error: code = NotFound desc = could not find container \"2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c\": container with ID starting with 2fc8f67c133300180ace1ecedd757063b95474a047e58ffd1ef384c720095c3c not found: ID does not exist" Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.002465 4877 scope.go:117] "RemoveContainer" containerID="e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8" Dec 11 18:16:49 crc kubenswrapper[4877]: E1211 18:16:49.002922 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8\": container with ID starting with e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8 not found: ID does not exist" containerID="e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8" Dec 11 18:16:49 crc 
kubenswrapper[4877]: I1211 18:16:49.002996 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8"} err="failed to get container status \"e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8\": rpc error: code = NotFound desc = could not find container \"e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8\": container with ID starting with e4680fc4c678938621030bdd7c639d14d6fa6aaa90981e326ccfb442f4ebbab8 not found: ID does not exist" Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.007089 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.015663 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-j9rql"] Dec 11 18:16:49 crc kubenswrapper[4877]: I1211 18:16:49.231226 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" path="/var/lib/kubelet/pods/d4d97300-ea2c-4e95-86ae-061bc789ab6d/volumes" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.440292 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:16:54 crc kubenswrapper[4877]: E1211 18:16:54.440653 4877 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 18:16:54 crc kubenswrapper[4877]: E1211 18:16:54.441410 4877 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 18:16:54 crc kubenswrapper[4877]: E1211 18:16:54.441519 4877 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift podName:c6eb39a0-5f8c-44d1-b27e-c946c850a539 nodeName:}" failed. No retries permitted until 2025-12-11 18:17:10.441489374 +0000 UTC m=+991.467733458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift") pod "swift-storage-0" (UID: "c6eb39a0-5f8c-44d1-b27e-c946c850a539") : configmap "swift-ring-files" not found Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.869859 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3b79-account-create-update-7278w"] Dec 11 18:16:54 crc kubenswrapper[4877]: E1211 18:16:54.872467 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="init" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.872586 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="init" Dec 11 18:16:54 crc kubenswrapper[4877]: E1211 18:16:54.872673 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="dnsmasq-dns" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.872731 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="dnsmasq-dns" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.874354 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d97300-ea2c-4e95-86ae-061bc789ab6d" containerName="dnsmasq-dns" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.875532 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.878901 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.888146 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3b79-account-create-update-7278w"] Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.932160 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6whzf"] Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.933598 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.942096 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6whzf"] Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.954508 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrlr\" (UniqueName: \"kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:54 crc kubenswrapper[4877]: I1211 18:16:54.954599 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.056820 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.056941 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.057011 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvw6\" (UniqueName: \"kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.057057 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrlr\" (UniqueName: \"kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.057859 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.081809 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pvrlr\" (UniqueName: \"kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr\") pod \"keystone-3b79-account-create-update-7278w\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.136486 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4l7l6"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.138034 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.150916 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4l7l6"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.161408 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbvw6\" (UniqueName: \"kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.161552 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.162280 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 
crc kubenswrapper[4877]: I1211 18:16:55.188419 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbvw6\" (UniqueName: \"kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6\") pod \"keystone-db-create-6whzf\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.194163 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-88d2-account-create-update-npshn"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.195969 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.198586 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.202153 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-88d2-account-create-update-npshn"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.216471 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.255778 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.263840 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.264446 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.264603 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7mh7\" (UniqueName: \"kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.264662 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgd7h\" (UniqueName: \"kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.368976 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.369060 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.369110 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7mh7\" (UniqueName: \"kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.369134 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgd7h\" (UniqueName: \"kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.370917 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.371717 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.388512 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-zhlnf"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.390251 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.395597 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgd7h\" (UniqueName: \"kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h\") pod \"placement-db-create-4l7l6\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.395703 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7mh7\" (UniqueName: \"kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7\") pod \"placement-88d2-account-create-update-npshn\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.405469 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zhlnf"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.461258 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.473684 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts\") pod \"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.473823 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfj6\" (UniqueName: \"kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6\") pod \"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.491701 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4297-account-create-update-kkkdm"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.492893 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.496166 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.500298 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4297-account-create-update-kkkdm"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.535365 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.576124 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lfj6\" (UniqueName: \"kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6\") pod \"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.576228 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.576264 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts\") pod \"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.576304 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhc8x\" (UniqueName: \"kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.578126 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts\") pod 
\"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.598648 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lfj6\" (UniqueName: \"kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6\") pod \"glance-db-create-zhlnf\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.678473 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.678534 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhc8x\" (UniqueName: \"kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.679439 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.696726 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhc8x\" (UniqueName: 
\"kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x\") pod \"glance-4297-account-create-update-kkkdm\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.728101 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.774955 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3b79-account-create-update-7278w"] Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.812423 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.850637 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6whzf"] Dec 11 18:16:55 crc kubenswrapper[4877]: W1211 18:16:55.872343 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd31530ff_c24d_4d0b_a197_4a1d1b638990.slice/crio-60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee WatchSource:0}: Error finding container 60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee: Status 404 returned error can't find the container with id 60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee Dec 11 18:16:55 crc kubenswrapper[4877]: I1211 18:16:55.938300 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4l7l6"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.037484 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zhlnf"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.062567 4877 generic.go:334] "Generic (PLEG): container finished" podID="daa2f87b-1f8a-423e-88f1-17150ab15ba0" 
containerID="f944e493e2811d1a09f9eb5ed847cb04b83d7ec316b4bfe3e04e0bc1e947c7aa" exitCode=0 Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.062637 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vdnmw" event={"ID":"daa2f87b-1f8a-423e-88f1-17150ab15ba0","Type":"ContainerDied","Data":"f944e493e2811d1a09f9eb5ed847cb04b83d7ec316b4bfe3e04e0bc1e947c7aa"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.064318 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-88d2-account-create-update-npshn"] Dec 11 18:16:56 crc kubenswrapper[4877]: W1211 18:16:56.086287 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda28a6a2_f437_44c5_9fb3_60cdbe0523d9.slice/crio-b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb WatchSource:0}: Error finding container b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb: Status 404 returned error can't find the container with id b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.087181 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zhlnf" event={"ID":"e3617227-9338-4854-b9bd-dc2416275371","Type":"ContainerStarted","Data":"93cc8403a0805452b1ad946f5d7f0f2057cdfb39767ef13279cdd524b0f2570f"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.096792 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3b79-account-create-update-7278w" event={"ID":"00780885-fa23-4d5f-a7f2-b5fb9bb5add9","Type":"ContainerStarted","Data":"6073bbf4b9e14711e2b0d582290b6a061c2ba7f6f569a41270f3ca82ac241837"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.096844 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3b79-account-create-update-7278w" 
event={"ID":"00780885-fa23-4d5f-a7f2-b5fb9bb5add9","Type":"ContainerStarted","Data":"8ec779e4a24ab6f6d789589ed7dd47e2994ab7d7bf06cc91c2da6d2ad0af137e"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.102345 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6whzf" event={"ID":"d31530ff-c24d-4d0b-a197-4a1d1b638990","Type":"ContainerStarted","Data":"60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.103488 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4l7l6" event={"ID":"e63d99b5-f6da-4796-a4a6-5b299fc7b55d","Type":"ContainerStarted","Data":"644c31b037f2ff4db37493f50abe7f4addb655ea3bdba9189f79336ffcff5125"} Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.136623 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3b79-account-create-update-7278w" podStartSLOduration=2.136596431 podStartE2EDuration="2.136596431s" podCreationTimestamp="2025-12-11 18:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:16:56.120912152 +0000 UTC m=+977.147156206" watchObservedRunningTime="2025-12-11 18:16:56.136596431 +0000 UTC m=+977.162840465" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.385707 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4297-account-create-update-kkkdm"] Dec 11 18:16:56 crc kubenswrapper[4877]: W1211 18:16:56.408775 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd1dca0_c7b0_4a84_893a_73db70d80919.slice/crio-d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584 WatchSource:0}: Error finding container d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584: Status 404 returned error can't find the container 
with id d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584 Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.693886 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-f2twl"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.695419 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.699148 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.711909 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f2twl"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816755 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816863 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzfn\" (UniqueName: \"kubernetes.io/projected/7600f73e-c321-4ec0-af52-684b7b75ec9f-kube-api-access-jmzfn\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816891 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-combined-ca-bundle\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " 
pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816921 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600f73e-c321-4ec0-af52-684b7b75ec9f-config\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816959 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovs-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.816984 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovn-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.842461 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xlsmm"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.844678 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.847361 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.853845 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xlsmm"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918195 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918252 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6srs\" (UniqueName: \"kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918283 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzfn\" (UniqueName: \"kubernetes.io/projected/7600f73e-c321-4ec0-af52-684b7b75ec9f-kube-api-access-jmzfn\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918314 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-combined-ca-bundle\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " 
pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600f73e-c321-4ec0-af52-684b7b75ec9f-config\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918399 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovs-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918423 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovn-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918444 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.918466 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:56 
crc kubenswrapper[4877]: I1211 18:16:56.918508 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.919729 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovn-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.919775 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7600f73e-c321-4ec0-af52-684b7b75ec9f-ovs-rundir\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.920505 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7600f73e-c321-4ec0-af52-684b7b75ec9f-config\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.932180 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.938105 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7600f73e-c321-4ec0-af52-684b7b75ec9f-combined-ca-bundle\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.951594 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzfn\" (UniqueName: \"kubernetes.io/projected/7600f73e-c321-4ec0-af52-684b7b75ec9f-kube-api-access-jmzfn\") pod \"ovn-controller-metrics-f2twl\" (UID: \"7600f73e-c321-4ec0-af52-684b7b75ec9f\") " pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.969152 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xlsmm"] Dec 11 18:16:56 crc kubenswrapper[4877]: E1211 18:16:56.969696 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-x6srs ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" podUID="c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.997472 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:16:56 crc kubenswrapper[4877]: I1211 18:16:56.999350 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.002342 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.015139 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.019097 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-f2twl" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.032186 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.032250 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.032352 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.032397 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6srs\" (UniqueName: \"kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.033828 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 
18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.034243 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.034543 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.053066 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6srs\" (UniqueName: \"kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs\") pod \"dnsmasq-dns-74f6f696b9-xlsmm\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.124205 4877 generic.go:334] "Generic (PLEG): container finished" podID="da28a6a2-f437-44c5-9fb3-60cdbe0523d9" containerID="464d9445c4c9aa30f57ee8ab4a1d59cbbb232ce5ad854cfe930f7384e96bc83d" exitCode=0 Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.124730 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-88d2-account-create-update-npshn" event={"ID":"da28a6a2-f437-44c5-9fb3-60cdbe0523d9","Type":"ContainerDied","Data":"464d9445c4c9aa30f57ee8ab4a1d59cbbb232ce5ad854cfe930f7384e96bc83d"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.124761 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-88d2-account-create-update-npshn" 
event={"ID":"da28a6a2-f437-44c5-9fb3-60cdbe0523d9","Type":"ContainerStarted","Data":"b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.131480 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b163a060-e7a9-4e81-992b-a9c72bbac544","Type":"ContainerStarted","Data":"dee786a2f7c65e43855af5cef0e6fc34672df5939b703bcf4c43f5ddf9d9dc82"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.133737 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.133789 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.133826 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.133851 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: 
\"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.133961 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxgs\" (UniqueName: \"kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.137447 4877 generic.go:334] "Generic (PLEG): container finished" podID="e3617227-9338-4854-b9bd-dc2416275371" containerID="54ad9c8b5261b9257a44e023a4b5313664aa7d61131e4111d7bae8aea5d964da" exitCode=0 Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.137511 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zhlnf" event={"ID":"e3617227-9338-4854-b9bd-dc2416275371","Type":"ContainerDied","Data":"54ad9c8b5261b9257a44e023a4b5313664aa7d61131e4111d7bae8aea5d964da"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.138944 4877 generic.go:334] "Generic (PLEG): container finished" podID="00780885-fa23-4d5f-a7f2-b5fb9bb5add9" containerID="6073bbf4b9e14711e2b0d582290b6a061c2ba7f6f569a41270f3ca82ac241837" exitCode=0 Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.138993 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3b79-account-create-update-7278w" event={"ID":"00780885-fa23-4d5f-a7f2-b5fb9bb5add9","Type":"ContainerDied","Data":"6073bbf4b9e14711e2b0d582290b6a061c2ba7f6f569a41270f3ca82ac241837"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.140094 4877 generic.go:334] "Generic (PLEG): container finished" podID="d31530ff-c24d-4d0b-a197-4a1d1b638990" containerID="b19a3510152bb7b7814140b6795acbb0b94a4bbb6bf38d3747952d6e904b024d" exitCode=0 Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.141365 4877 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6whzf" event={"ID":"d31530ff-c24d-4d0b-a197-4a1d1b638990","Type":"ContainerDied","Data":"b19a3510152bb7b7814140b6795acbb0b94a4bbb6bf38d3747952d6e904b024d"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.152052 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4297-account-create-update-kkkdm" event={"ID":"9bd1dca0-c7b0-4a84-893a-73db70d80919","Type":"ContainerStarted","Data":"4b764e2622b2f237a20ae7d1465cf185b8d6427ccf5672b753062307a9c67f46"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.152092 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4297-account-create-update-kkkdm" event={"ID":"9bd1dca0-c7b0-4a84-893a-73db70d80919","Type":"ContainerStarted","Data":"d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.153257 4877 generic.go:334] "Generic (PLEG): container finished" podID="e63d99b5-f6da-4796-a4a6-5b299fc7b55d" containerID="c152a4ad35508cda15d01261a25ab99d9aa1b55b4244c57d921680c2a6adde95" exitCode=0 Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.153452 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4l7l6" event={"ID":"e63d99b5-f6da-4796-a4a6-5b299fc7b55d","Type":"ContainerDied","Data":"c152a4ad35508cda15d01261a25ab99d9aa1b55b4244c57d921680c2a6adde95"} Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.153492 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.183953 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.045027103 podStartE2EDuration="47.183930211s" podCreationTimestamp="2025-12-11 18:16:10 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.639599526 +0000 UTC m=+938.665843570" lastFinishedPulling="2025-12-11 18:16:55.778502634 +0000 UTC m=+976.804746678" observedRunningTime="2025-12-11 18:16:57.171972532 +0000 UTC m=+978.198216586" watchObservedRunningTime="2025-12-11 18:16:57.183930211 +0000 UTC m=+978.210174255" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.201637 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.245593 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljxgs\" (UniqueName: \"kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.245715 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.245788 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " 
pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.245825 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.245862 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.247565 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.248148 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.248803 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.250106 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.271504 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxgs\" (UniqueName: \"kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs\") pod \"dnsmasq-dns-698758b865-k8mn8\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.314577 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.347141 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb\") pod \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.347624 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6srs\" (UniqueName: \"kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs\") pod \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.347796 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc\") pod \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " Dec 11 18:16:57 crc kubenswrapper[4877]: 
I1211 18:16:57.347868 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" (UID: "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.348012 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config\") pod \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\" (UID: \"c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.348788 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" (UID: "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.349131 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.349161 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.350568 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config" (OuterVolumeSpecName: "config") pod "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" (UID: "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.366738 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs" (OuterVolumeSpecName: "kube-api-access-x6srs") pod "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" (UID: "c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c"). InnerVolumeSpecName "kube-api-access-x6srs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.459045 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.459600 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6srs\" (UniqueName: \"kubernetes.io/projected/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c-kube-api-access-x6srs\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.615286 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-f2twl"] Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.675090 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.771677 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.771749 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.771804 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc 
kubenswrapper[4877]: I1211 18:16:57.771852 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.771896 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.772040 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmwf5\" (UniqueName: \"kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.772219 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle\") pod \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\" (UID: \"daa2f87b-1f8a-423e-88f1-17150ab15ba0\") " Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.773992 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.774164 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.780614 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5" (OuterVolumeSpecName: "kube-api-access-hmwf5") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "kube-api-access-hmwf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.785361 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.796740 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.798685 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.801107 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts" (OuterVolumeSpecName: "scripts") pod "daa2f87b-1f8a-423e-88f1-17150ab15ba0" (UID: "daa2f87b-1f8a-423e-88f1-17150ab15ba0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:57 crc kubenswrapper[4877]: W1211 18:16:57.857672 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d772db_0b17_4c84_b0b9_29deb9a368f2.slice/crio-df5e329152bd911572b61ef54fd501943cd359332cc52c97d7b23c65e60e625e WatchSource:0}: Error finding container df5e329152bd911572b61ef54fd501943cd359332cc52c97d7b23c65e60e625e: Status 404 returned error can't find the container with id df5e329152bd911572b61ef54fd501943cd359332cc52c97d7b23c65e60e625e Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.859568 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874207 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874260 4877 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874270 4877 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874281 4877 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/daa2f87b-1f8a-423e-88f1-17150ab15ba0-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874293 4877 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/daa2f87b-1f8a-423e-88f1-17150ab15ba0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874303 4877 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/daa2f87b-1f8a-423e-88f1-17150ab15ba0-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:57 crc kubenswrapper[4877]: I1211 18:16:57.874313 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmwf5\" (UniqueName: \"kubernetes.io/projected/daa2f87b-1f8a-423e-88f1-17150ab15ba0-kube-api-access-hmwf5\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.163938 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vdnmw" event={"ID":"daa2f87b-1f8a-423e-88f1-17150ab15ba0","Type":"ContainerDied","Data":"d7c9c185813595999b734107bad02482ddf85f1be90fe05b55b7b233758a13ef"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.164290 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c9c185813595999b734107bad02482ddf85f1be90fe05b55b7b233758a13ef" Dec 
11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.163979 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vdnmw" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.166095 4877 generic.go:334] "Generic (PLEG): container finished" podID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerID="3944239f64e39309944c139005f39764b5eabeb914d0d144dd8bb389487522cc" exitCode=0 Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.166171 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8mn8" event={"ID":"50d772db-0b17-4c84-b0b9-29deb9a368f2","Type":"ContainerDied","Data":"3944239f64e39309944c139005f39764b5eabeb914d0d144dd8bb389487522cc"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.166212 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8mn8" event={"ID":"50d772db-0b17-4c84-b0b9-29deb9a368f2","Type":"ContainerStarted","Data":"df5e329152bd911572b61ef54fd501943cd359332cc52c97d7b23c65e60e625e"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.171019 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde","Type":"ContainerStarted","Data":"43eee48ddb14944381ff505d3d6578d75a41018020fa61b200eecc8bd8083dfd"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.173922 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f2twl" event={"ID":"7600f73e-c321-4ec0-af52-684b7b75ec9f","Type":"ContainerStarted","Data":"68d013984a1eeb348372ecdbe72182235e6817816d193a6110b3978cbe11387f"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.173993 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-f2twl" event={"ID":"7600f73e-c321-4ec0-af52-684b7b75ec9f","Type":"ContainerStarted","Data":"8f901e135ede80e1c7fce3629b5f1d507f8cc1e66e3803d83f8cb006eee1e432"} 
Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.175854 4877 generic.go:334] "Generic (PLEG): container finished" podID="9bd1dca0-c7b0-4a84-893a-73db70d80919" containerID="4b764e2622b2f237a20ae7d1465cf185b8d6427ccf5672b753062307a9c67f46" exitCode=0 Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.176134 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4297-account-create-update-kkkdm" event={"ID":"9bd1dca0-c7b0-4a84-893a-73db70d80919","Type":"ContainerDied","Data":"4b764e2622b2f237a20ae7d1465cf185b8d6427ccf5672b753062307a9c67f46"} Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.176272 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xlsmm" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.252262 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-f2twl" podStartSLOduration=2.252239082 podStartE2EDuration="2.252239082s" podCreationTimestamp="2025-12-11 18:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:16:58.245933564 +0000 UTC m=+979.272177648" watchObservedRunningTime="2025-12-11 18:16:58.252239082 +0000 UTC m=+979.278483126" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.344984 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=37.184128817 podStartE2EDuration="45.344955746s" podCreationTimestamp="2025-12-11 18:16:13 +0000 UTC" firstStartedPulling="2025-12-11 18:16:17.86793104 +0000 UTC m=+938.894175084" lastFinishedPulling="2025-12-11 18:16:26.028757969 +0000 UTC m=+947.055002013" observedRunningTime="2025-12-11 18:16:58.275333788 +0000 UTC m=+979.301577842" watchObservedRunningTime="2025-12-11 18:16:58.344955746 +0000 UTC m=+979.371199790" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 
18:16:58.381447 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xlsmm"] Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.391742 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xlsmm"] Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.541275 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 18:16:58 crc kubenswrapper[4877]: E1211 18:16:58.543429 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa2f87b-1f8a-423e-88f1-17150ab15ba0" containerName="swift-ring-rebalance" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.543447 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa2f87b-1f8a-423e-88f1-17150ab15ba0" containerName="swift-ring-rebalance" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.543723 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa2f87b-1f8a-423e-88f1-17150ab15ba0" containerName="swift-ring-rebalance" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.544754 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.551152 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.551314 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.551548 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.554252 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-r2df4" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.572249 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690017 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-scripts\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690152 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690208 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvfp\" (UniqueName: \"kubernetes.io/projected/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-kube-api-access-9wvfp\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " 
pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690227 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690253 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690274 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.690297 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-config\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.791909 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.791986 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9wvfp\" (UniqueName: \"kubernetes.io/projected/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-kube-api-access-9wvfp\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.792012 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.792034 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.792056 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.792082 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-config\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.792106 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-scripts\") pod \"ovn-northd-0\" (UID: 
\"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.793071 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-scripts\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.794095 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.794538 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-config\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.798860 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.799392 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.799426 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.824047 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wvfp\" (UniqueName: \"kubernetes.io/projected/c38d6b86-ecb2-47de-a1f3-6670ff0eb78b-kube-api-access-9wvfp\") pod \"ovn-northd-0\" (UID: \"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b\") " pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.852129 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.857251 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.868362 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.895222 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.921065 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.926024 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.999108 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts\") pod \"d31530ff-c24d-4d0b-a197-4a1d1b638990\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.999636 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbvw6\" (UniqueName: \"kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6\") pod \"d31530ff-c24d-4d0b-a197-4a1d1b638990\" (UID: \"d31530ff-c24d-4d0b-a197-4a1d1b638990\") " Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.999671 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts\") pod \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " Dec 11 18:16:58 crc kubenswrapper[4877]: I1211 18:16:58.999743 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvrlr\" (UniqueName: \"kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr\") pod \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\" (UID: \"00780885-fa23-4d5f-a7f2-b5fb9bb5add9\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.000276 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00780885-fa23-4d5f-a7f2-b5fb9bb5add9" (UID: "00780885-fa23-4d5f-a7f2-b5fb9bb5add9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.000344 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d31530ff-c24d-4d0b-a197-4a1d1b638990" (UID: "d31530ff-c24d-4d0b-a197-4a1d1b638990"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.000887 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts\") pod \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.000921 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts\") pod \"e3617227-9338-4854-b9bd-dc2416275371\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001058 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts\") pod \"9bd1dca0-c7b0-4a84-893a-73db70d80919\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001192 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhc8x\" (UniqueName: \"kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x\") pod \"9bd1dca0-c7b0-4a84-893a-73db70d80919\" (UID: \"9bd1dca0-c7b0-4a84-893a-73db70d80919\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 
18:16:59.001226 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7mh7\" (UniqueName: \"kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7\") pod \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\" (UID: \"da28a6a2-f437-44c5-9fb3-60cdbe0523d9\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001281 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfj6\" (UniqueName: \"kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6\") pod \"e3617227-9338-4854-b9bd-dc2416275371\" (UID: \"e3617227-9338-4854-b9bd-dc2416275371\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001879 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d31530ff-c24d-4d0b-a197-4a1d1b638990-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001899 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.001989 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bd1dca0-c7b0-4a84-893a-73db70d80919" (UID: "9bd1dca0-c7b0-4a84-893a-73db70d80919"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.002094 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da28a6a2-f437-44c5-9fb3-60cdbe0523d9" (UID: "da28a6a2-f437-44c5-9fb3-60cdbe0523d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.002483 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3617227-9338-4854-b9bd-dc2416275371" (UID: "e3617227-9338-4854-b9bd-dc2416275371"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.008072 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr" (OuterVolumeSpecName: "kube-api-access-pvrlr") pod "00780885-fa23-4d5f-a7f2-b5fb9bb5add9" (UID: "00780885-fa23-4d5f-a7f2-b5fb9bb5add9"). InnerVolumeSpecName "kube-api-access-pvrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.008122 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7" (OuterVolumeSpecName: "kube-api-access-m7mh7") pod "da28a6a2-f437-44c5-9fb3-60cdbe0523d9" (UID: "da28a6a2-f437-44c5-9fb3-60cdbe0523d9"). InnerVolumeSpecName "kube-api-access-m7mh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.008183 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6" (OuterVolumeSpecName: "kube-api-access-8lfj6") pod "e3617227-9338-4854-b9bd-dc2416275371" (UID: "e3617227-9338-4854-b9bd-dc2416275371"). InnerVolumeSpecName "kube-api-access-8lfj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.010413 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6" (OuterVolumeSpecName: "kube-api-access-pbvw6") pod "d31530ff-c24d-4d0b-a197-4a1d1b638990" (UID: "d31530ff-c24d-4d0b-a197-4a1d1b638990"). InnerVolumeSpecName "kube-api-access-pbvw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.012685 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x" (OuterVolumeSpecName: "kube-api-access-vhc8x") pod "9bd1dca0-c7b0-4a84-893a-73db70d80919" (UID: "9bd1dca0-c7b0-4a84-893a-73db70d80919"). InnerVolumeSpecName "kube-api-access-vhc8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.019729 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.103996 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts\") pod \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.104246 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgd7h\" (UniqueName: \"kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h\") pod \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\" (UID: \"e63d99b5-f6da-4796-a4a6-5b299fc7b55d\") " Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.104756 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e63d99b5-f6da-4796-a4a6-5b299fc7b55d" (UID: "e63d99b5-f6da-4796-a4a6-5b299fc7b55d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105531 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhc8x\" (UniqueName: \"kubernetes.io/projected/9bd1dca0-c7b0-4a84-893a-73db70d80919-kube-api-access-vhc8x\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105609 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7mh7\" (UniqueName: \"kubernetes.io/projected/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-kube-api-access-m7mh7\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105625 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfj6\" (UniqueName: \"kubernetes.io/projected/e3617227-9338-4854-b9bd-dc2416275371-kube-api-access-8lfj6\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105638 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbvw6\" (UniqueName: \"kubernetes.io/projected/d31530ff-c24d-4d0b-a197-4a1d1b638990-kube-api-access-pbvw6\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105701 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105717 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvrlr\" (UniqueName: \"kubernetes.io/projected/00780885-fa23-4d5f-a7f2-b5fb9bb5add9-kube-api-access-pvrlr\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105730 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da28a6a2-f437-44c5-9fb3-60cdbe0523d9-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105783 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3617227-9338-4854-b9bd-dc2416275371-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.105800 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bd1dca0-c7b0-4a84-893a-73db70d80919-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.110881 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h" (OuterVolumeSpecName: "kube-api-access-sgd7h") pod "e63d99b5-f6da-4796-a4a6-5b299fc7b55d" (UID: "e63d99b5-f6da-4796-a4a6-5b299fc7b55d"). InnerVolumeSpecName "kube-api-access-sgd7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.208786 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgd7h\" (UniqueName: \"kubernetes.io/projected/e63d99b5-f6da-4796-a4a6-5b299fc7b55d-kube-api-access-sgd7h\") on node \"crc\" DevicePath \"\"" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.213172 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8mn8" event={"ID":"50d772db-0b17-4c84-b0b9-29deb9a368f2","Type":"ContainerStarted","Data":"38e03a991b1e58fcd21737abead67fc68512c2e76c0adbd8d6305f5f91ef11d3"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.214197 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.217794 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zhlnf" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.220026 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3b79-account-create-update-7278w" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.222998 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6whzf" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.240191 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4297-account-create-update-kkkdm" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.246986 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-k8mn8" podStartSLOduration=3.246967798 podStartE2EDuration="3.246967798s" podCreationTimestamp="2025-12-11 18:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:16:59.23729602 +0000 UTC m=+980.263540084" watchObservedRunningTime="2025-12-11 18:16:59.246967798 +0000 UTC m=+980.273211842" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.250018 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c" path="/var/lib/kubelet/pods/c2c5fa40-93e8-4314-8f65-ef9f3c5d2e7c/volumes" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.250501 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4l7l6" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253368 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-88d2-account-create-update-npshn" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253894 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zhlnf" event={"ID":"e3617227-9338-4854-b9bd-dc2416275371","Type":"ContainerDied","Data":"93cc8403a0805452b1ad946f5d7f0f2057cdfb39767ef13279cdd524b0f2570f"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253928 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cc8403a0805452b1ad946f5d7f0f2057cdfb39767ef13279cdd524b0f2570f" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253940 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3b79-account-create-update-7278w" event={"ID":"00780885-fa23-4d5f-a7f2-b5fb9bb5add9","Type":"ContainerDied","Data":"8ec779e4a24ab6f6d789589ed7dd47e2994ab7d7bf06cc91c2da6d2ad0af137e"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253951 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec779e4a24ab6f6d789589ed7dd47e2994ab7d7bf06cc91c2da6d2ad0af137e" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253968 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6whzf" event={"ID":"d31530ff-c24d-4d0b-a197-4a1d1b638990","Type":"ContainerDied","Data":"60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253979 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60465b6a3e24d2d6f4e35df9d25167094b9753bf1afeb6d41eb18d43222452ee" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.253989 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4297-account-create-update-kkkdm" event={"ID":"9bd1dca0-c7b0-4a84-893a-73db70d80919","Type":"ContainerDied","Data":"d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584"} 
Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.254001 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77a0bdd255da114661bc756cd4883ca783f55d0175de47a9d38245b79354584" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.254010 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4l7l6" event={"ID":"e63d99b5-f6da-4796-a4a6-5b299fc7b55d","Type":"ContainerDied","Data":"644c31b037f2ff4db37493f50abe7f4addb655ea3bdba9189f79336ffcff5125"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.254020 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644c31b037f2ff4db37493f50abe7f4addb655ea3bdba9189f79336ffcff5125" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.254029 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-88d2-account-create-update-npshn" event={"ID":"da28a6a2-f437-44c5-9fb3-60cdbe0523d9","Type":"ContainerDied","Data":"b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb"} Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.254038 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72077b9bb2665bce4b48cc3ceced2d4920106ff9b65e83556ed32fc0fed88bb" Dec 11 18:16:59 crc kubenswrapper[4877]: I1211 18:16:59.407125 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 18:16:59 crc kubenswrapper[4877]: W1211 18:16:59.412133 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38d6b86_ecb2_47de_a1f3_6670ff0eb78b.slice/crio-7704dca5332e4663847fbbf4faaecddc76dbab8ccb461ca16b6e8869231a0022 WatchSource:0}: Error finding container 7704dca5332e4663847fbbf4faaecddc76dbab8ccb461ca16b6e8869231a0022: Status 404 returned error can't find the container with id 7704dca5332e4663847fbbf4faaecddc76dbab8ccb461ca16b6e8869231a0022 Dec 11 18:17:00 crc 
kubenswrapper[4877]: I1211 18:17:00.269916 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b","Type":"ContainerStarted","Data":"7704dca5332e4663847fbbf4faaecddc76dbab8ccb461ca16b6e8869231a0022"} Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777087 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7hhdv"] Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777557 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3617227-9338-4854-b9bd-dc2416275371" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777579 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3617227-9338-4854-b9bd-dc2416275371" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777612 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da28a6a2-f437-44c5-9fb3-60cdbe0523d9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777621 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="da28a6a2-f437-44c5-9fb3-60cdbe0523d9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777643 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63d99b5-f6da-4796-a4a6-5b299fc7b55d" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777651 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63d99b5-f6da-4796-a4a6-5b299fc7b55d" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777670 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31530ff-c24d-4d0b-a197-4a1d1b638990" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777677 4877 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d31530ff-c24d-4d0b-a197-4a1d1b638990" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777685 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd1dca0-c7b0-4a84-893a-73db70d80919" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777692 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd1dca0-c7b0-4a84-893a-73db70d80919" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: E1211 18:17:00.777699 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00780885-fa23-4d5f-a7f2-b5fb9bb5add9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777705 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="00780885-fa23-4d5f-a7f2-b5fb9bb5add9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777907 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63d99b5-f6da-4796-a4a6-5b299fc7b55d" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777928 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="00780885-fa23-4d5f-a7f2-b5fb9bb5add9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777938 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd1dca0-c7b0-4a84-893a-73db70d80919" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777954 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31530ff-c24d-4d0b-a197-4a1d1b638990" containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777971 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3617227-9338-4854-b9bd-dc2416275371" 
containerName="mariadb-database-create" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.777990 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="da28a6a2-f437-44c5-9fb3-60cdbe0523d9" containerName="mariadb-account-create-update" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.778864 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.782072 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.782196 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9r6gv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.849633 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrt4\" (UniqueName: \"kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.850090 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.850115 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: 
I1211 18:17:00.850136 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.854499 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7hhdv"] Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.882053 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zdz6c" podUID="efc5ef2c-fcea-4de5-a085-47ff35a33522" containerName="ovn-controller" probeResult="failure" output=< Dec 11 18:17:00 crc kubenswrapper[4877]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 18:17:00 crc kubenswrapper[4877]: > Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.907074 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.910760 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6g4mt" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.951571 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.951652 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data\") pod \"glance-db-sync-7hhdv\" (UID: 
\"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.951685 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.951827 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrt4\" (UniqueName: \"kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.956900 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.959124 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 18:17:00.963136 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:00 crc kubenswrapper[4877]: I1211 
18:17:00.971752 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrt4\" (UniqueName: \"kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4\") pod \"glance-db-sync-7hhdv\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.148887 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zdz6c-config-ll6gq"] Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.150149 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.155062 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.164124 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.171556 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c-config-ll6gq"] Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.256570 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.257526 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " 
pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.258442 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.258614 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6d4\" (UniqueName: \"kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.258841 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.259486 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.282219 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b","Type":"ContainerStarted","Data":"494939f547f7121f8a98ab1ea868f0b46fbd4f4ec39c738bee51f40185ff1b54"} Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.284725 4877 generic.go:334] "Generic (PLEG): container finished" podID="18fc032b-7957-4e94-929a-47c04d67b45f" containerID="3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b" exitCode=0 Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.284763 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerDied","Data":"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b"} Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.289430 4877 generic.go:334] "Generic (PLEG): container finished" podID="d003258a-8e88-4f72-b82b-2367c81bd081" containerID="95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9" exitCode=0 Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.289817 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerDied","Data":"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9"} Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.366239 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.366795 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " 
pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.366868 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.366961 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.367027 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.367097 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr6d4\" (UniqueName: \"kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.368992 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " 
pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.373656 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.373734 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.374003 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.376797 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.388018 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr6d4\" (UniqueName: \"kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4\") pod \"ovn-controller-zdz6c-config-ll6gq\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 
18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.466171 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.834404 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7hhdv"] Dec 11 18:17:01 crc kubenswrapper[4877]: I1211 18:17:01.993797 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c-config-ll6gq"] Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.301886 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-ll6gq" event={"ID":"0aae5967-0952-44c4-ae3a-469e965066af","Type":"ContainerStarted","Data":"a2bc08b1079784d947469e9e5fb217e76d3f4bc9c6411667e9b77690be33cb72"} Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.303359 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7hhdv" event={"ID":"9b58d9e9-69e6-42e8-86eb-538ac26c6340","Type":"ContainerStarted","Data":"d5fef6ed2419b6aaaaf41fb542b33b6cdaa206ca0a9e1ca453e6f285c0a3285c"} Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.306269 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerStarted","Data":"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296"} Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.306590 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.308334 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c38d6b86-ecb2-47de-a1f3-6670ff0eb78b","Type":"ContainerStarted","Data":"13b5d04a030665cee535848bcc226ce2e796bbd63ab322019d4ab3cd79ed8935"} Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.308551 4877 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.311418 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerStarted","Data":"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780"} Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.311770 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.338239 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.45651628 podStartE2EDuration="1m1.338213094s" podCreationTimestamp="2025-12-11 18:16:01 +0000 UTC" firstStartedPulling="2025-12-11 18:16:15.566152342 +0000 UTC m=+936.592396426" lastFinishedPulling="2025-12-11 18:16:25.447849196 +0000 UTC m=+946.474093240" observedRunningTime="2025-12-11 18:17:02.330403256 +0000 UTC m=+983.356647330" watchObservedRunningTime="2025-12-11 18:17:02.338213094 +0000 UTC m=+983.364457148" Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.354625 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.883682276 podStartE2EDuration="4.354597341s" podCreationTimestamp="2025-12-11 18:16:58 +0000 UTC" firstStartedPulling="2025-12-11 18:16:59.415786383 +0000 UTC m=+980.442030437" lastFinishedPulling="2025-12-11 18:17:00.886701458 +0000 UTC m=+981.912945502" observedRunningTime="2025-12-11 18:17:02.34741736 +0000 UTC m=+983.373661424" watchObservedRunningTime="2025-12-11 18:17:02.354597341 +0000 UTC m=+983.380841395" Dec 11 18:17:02 crc kubenswrapper[4877]: I1211 18:17:02.380822 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.409980165 
podStartE2EDuration="1m2.380792141s" podCreationTimestamp="2025-12-11 18:16:00 +0000 UTC" firstStartedPulling="2025-12-11 18:16:16.61042567 +0000 UTC m=+937.636669714" lastFinishedPulling="2025-12-11 18:16:25.581237646 +0000 UTC m=+946.607481690" observedRunningTime="2025-12-11 18:17:02.372186461 +0000 UTC m=+983.398430515" watchObservedRunningTime="2025-12-11 18:17:02.380792141 +0000 UTC m=+983.407036185" Dec 11 18:17:03 crc kubenswrapper[4877]: I1211 18:17:03.322830 4877 generic.go:334] "Generic (PLEG): container finished" podID="0aae5967-0952-44c4-ae3a-469e965066af" containerID="b770f3a72015a98d6a82bed7cef9f486e977273d5998ace9ac26576c8738edf3" exitCode=0 Dec 11 18:17:03 crc kubenswrapper[4877]: I1211 18:17:03.323477 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-ll6gq" event={"ID":"0aae5967-0952-44c4-ae3a-469e965066af","Type":"ContainerDied","Data":"b770f3a72015a98d6a82bed7cef9f486e977273d5998ace9ac26576c8738edf3"} Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.696054 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.755977 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756098 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756141 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756141 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756175 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756206 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756305 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756368 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr6d4\" (UniqueName: \"kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4\") pod \"0aae5967-0952-44c4-ae3a-469e965066af\" (UID: \"0aae5967-0952-44c4-ae3a-469e965066af\") " Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756351 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run" (OuterVolumeSpecName: "var-run") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756805 4877 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756820 4877 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.756831 4877 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0aae5967-0952-44c4-ae3a-469e965066af-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.757469 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.760551 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts" (OuterVolumeSpecName: "scripts") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.766505 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4" (OuterVolumeSpecName: "kube-api-access-vr6d4") pod "0aae5967-0952-44c4-ae3a-469e965066af" (UID: "0aae5967-0952-44c4-ae3a-469e965066af"). InnerVolumeSpecName "kube-api-access-vr6d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.859130 4877 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.859182 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr6d4\" (UniqueName: \"kubernetes.io/projected/0aae5967-0952-44c4-ae3a-469e965066af-kube-api-access-vr6d4\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:04 crc kubenswrapper[4877]: I1211 18:17:04.859196 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0aae5967-0952-44c4-ae3a-469e965066af-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.343484 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-ll6gq" event={"ID":"0aae5967-0952-44c4-ae3a-469e965066af","Type":"ContainerDied","Data":"a2bc08b1079784d947469e9e5fb217e76d3f4bc9c6411667e9b77690be33cb72"} Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.344033 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bc08b1079784d947469e9e5fb217e76d3f4bc9c6411667e9b77690be33cb72" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.343652 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-ll6gq" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.827280 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zdz6c-config-ll6gq"] Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.835036 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zdz6c-config-ll6gq"] Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.869547 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zdz6c" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.930264 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zdz6c-config-4ngh6"] Dec 11 18:17:05 crc kubenswrapper[4877]: E1211 18:17:05.930660 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aae5967-0952-44c4-ae3a-469e965066af" containerName="ovn-config" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.930677 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aae5967-0952-44c4-ae3a-469e965066af" containerName="ovn-config" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.930894 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aae5967-0952-44c4-ae3a-469e965066af" containerName="ovn-config" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.931531 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.934049 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.947897 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c-config-4ngh6"] Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979271 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sxhp\" (UniqueName: \"kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979337 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979388 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979410 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: 
\"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979556 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:05 crc kubenswrapper[4877]: I1211 18:17:05.979876 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081557 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081617 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sxhp\" (UniqueName: \"kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081670 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run\") pod 
\"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081704 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081728 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.081765 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.082076 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.082087 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: 
\"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.082076 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.083286 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.086600 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.102172 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sxhp\" (UniqueName: \"kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp\") pod \"ovn-controller-zdz6c-config-4ngh6\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") " pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.247582 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-4ngh6" Dec 11 18:17:06 crc kubenswrapper[4877]: I1211 18:17:06.723249 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zdz6c-config-4ngh6"] Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.235963 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aae5967-0952-44c4-ae3a-469e965066af" path="/var/lib/kubelet/pods/0aae5967-0952-44c4-ae3a-469e965066af/volumes" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.319758 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.375252 4877 generic.go:334] "Generic (PLEG): container finished" podID="91672b3b-2705-49d0-8905-08768f60ab7b" containerID="3dfb7c565c1c40ab9f1f6b5e3960e84fe74275bf55171dc8ee1e6ee72aa3c019" exitCode=0 Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.375317 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-4ngh6" event={"ID":"91672b3b-2705-49d0-8905-08768f60ab7b","Type":"ContainerDied","Data":"3dfb7c565c1c40ab9f1f6b5e3960e84fe74275bf55171dc8ee1e6ee72aa3c019"} Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.375349 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-4ngh6" event={"ID":"91672b3b-2705-49d0-8905-08768f60ab7b","Type":"ContainerStarted","Data":"eb9b0f6d99d5929ad3557f7fb2cf9566d7cdd497096d7c921390fdcd6e57f974"} Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.413964 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.414285 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="dnsmasq-dns" 
containerID="cri-o://52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68" gracePeriod=10 Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.809875 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.814543 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.831241 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.831304 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.831363 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwl8c\" (UniqueName: \"kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.837237 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.933194 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.933281 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.933454 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwl8c\" (UniqueName: \"kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.934126 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.934871 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:07 crc kubenswrapper[4877]: I1211 18:17:07.960038 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwl8c\" (UniqueName: 
\"kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c\") pod \"redhat-operators-qv9nh\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.002630 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.035536 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlkbq\" (UniqueName: \"kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq\") pod \"c7a32890-1340-4640-b192-526f6c90e72e\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.035591 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config\") pod \"c7a32890-1340-4640-b192-526f6c90e72e\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.035717 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc\") pod \"c7a32890-1340-4640-b192-526f6c90e72e\" (UID: \"c7a32890-1340-4640-b192-526f6c90e72e\") " Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.049756 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq" (OuterVolumeSpecName: "kube-api-access-dlkbq") pod "c7a32890-1340-4640-b192-526f6c90e72e" (UID: "c7a32890-1340-4640-b192-526f6c90e72e"). InnerVolumeSpecName "kube-api-access-dlkbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.078058 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config" (OuterVolumeSpecName: "config") pod "c7a32890-1340-4640-b192-526f6c90e72e" (UID: "c7a32890-1340-4640-b192-526f6c90e72e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.089359 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7a32890-1340-4640-b192-526f6c90e72e" (UID: "c7a32890-1340-4640-b192-526f6c90e72e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.139816 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlkbq\" (UniqueName: \"kubernetes.io/projected/c7a32890-1340-4640-b192-526f6c90e72e-kube-api-access-dlkbq\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.139864 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.139873 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7a32890-1340-4640-b192-526f6c90e72e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.150629 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.388176 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.388638 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" event={"ID":"c7a32890-1340-4640-b192-526f6c90e72e","Type":"ContainerDied","Data":"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68"} Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.389071 4877 scope.go:117] "RemoveContainer" containerID="52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68" Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.387406 4877 generic.go:334] "Generic (PLEG): container finished" podID="c7a32890-1340-4640-b192-526f6c90e72e" containerID="52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68" exitCode=0 Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.390341 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" event={"ID":"c7a32890-1340-4640-b192-526f6c90e72e","Type":"ContainerDied","Data":"d71524be336f4d3812f54ee2baa92fd5656fb38349bf1b53412c9f0d105ee711"} Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.442714 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.453278 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-nfntl"] Dec 11 18:17:08 crc kubenswrapper[4877]: I1211 18:17:08.702967 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:17:09 crc kubenswrapper[4877]: I1211 18:17:09.228679 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a32890-1340-4640-b192-526f6c90e72e" path="/var/lib/kubelet/pods/c7a32890-1340-4640-b192-526f6c90e72e/volumes" Dec 11 18:17:10 crc kubenswrapper[4877]: I1211 18:17:10.493092 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:17:10 crc kubenswrapper[4877]: I1211 18:17:10.510864 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c6eb39a0-5f8c-44d1-b27e-c946c850a539-etc-swift\") pod \"swift-storage-0\" (UID: \"c6eb39a0-5f8c-44d1-b27e-c946c850a539\") " pod="openstack/swift-storage-0" Dec 11 18:17:10 crc kubenswrapper[4877]: I1211 18:17:10.628876 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.239640 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.587910 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.619973 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-tmjmw"] Dec 11 18:17:12 crc kubenswrapper[4877]: E1211 18:17:12.620454 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="dnsmasq-dns" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.620479 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="dnsmasq-dns" Dec 11 18:17:12 crc kubenswrapper[4877]: E1211 18:17:12.620501 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="init" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.620510 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="init" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.620720 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="dnsmasq-dns" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.621424 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.649226 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tmjmw"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.718737 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6dd3-account-create-update-xwr7s"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.719899 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.725316 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.734878 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rlb72"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.736391 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.741616 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb8f7\" (UniqueName: \"kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.741873 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.750503 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6dd3-account-create-update-xwr7s"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.768297 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rlb72"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.812174 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-nfntl" podUID="c7a32890-1340-4640-b192-526f6c90e72e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: i/o timeout" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.843807 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: 
I1211 18:17:12.843869 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb8f7\" (UniqueName: \"kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.843900 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hl5d\" (UniqueName: \"kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.843970 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjgl\" (UniqueName: \"kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.844661 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.844762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 
18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.845668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.881457 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb8f7\" (UniqueName: \"kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7\") pod \"barbican-db-create-tmjmw\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.898496 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qj6wx"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.899774 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qj6wx" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.924466 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c23a-account-create-update-mrnds"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.925897 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c23a-account-create-update-mrnds" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.929704 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.946456 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qj6wx"] Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.947522 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.947573 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.947600 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hl5d\" (UniqueName: \"kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.947629 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjgl\" (UniqueName: \"kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " 
pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.947814 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.953340 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.953918 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.974546 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjgl\" (UniqueName: \"kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl\") pod \"cinder-6dd3-account-create-update-xwr7s\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:12 crc kubenswrapper[4877]: I1211 18:17:12.994469 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hl5d\" (UniqueName: \"kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d\") pod \"cinder-db-create-rlb72\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.035949 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.045514 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c23a-account-create-update-mrnds"] Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.049359 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2b2h\" (UniqueName: \"kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.049472 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.049538 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.049578 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dck8h\" (UniqueName: \"kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx" Dec 11 18:17:13 crc kubenswrapper[4877]: 
I1211 18:17:13.069391 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.092556 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6m6wj"] Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.093960 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6m6wj" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.099425 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.099687 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.099862 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.100739 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dcm7" Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.127489 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6m6wj"] Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.135541 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-895b-account-create-update-rbv8b"] Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.136966 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.141197 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.148518 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-895b-account-create-update-rbv8b"]
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154386 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2b2h\" (UniqueName: \"kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154430 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154479 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154521 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154580 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154617 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dck8h\" (UniqueName: \"kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.154644 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pfp\" (UniqueName: \"kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.155779 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.158921 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.189000 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dck8h\" (UniqueName: \"kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h\") pod \"neutron-db-create-qj6wx\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " pod="openstack/neutron-db-create-qj6wx"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.192684 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2b2h\" (UniqueName: \"kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h\") pod \"barbican-c23a-account-create-update-mrnds\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " pod="openstack/barbican-c23a-account-create-update-mrnds"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.249129 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qj6wx"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.257763 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.258191 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2cq\" (UniqueName: \"kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.258242 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pfp\" (UniqueName: \"kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.258290 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.258496 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.261856 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c23a-account-create-update-mrnds"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.262935 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.263344 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.282714 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pfp\" (UniqueName: \"kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp\") pod \"keystone-db-sync-6m6wj\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.360604 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.360681 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2cq\" (UniqueName: \"kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.361390 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.378543 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2cq\" (UniqueName: \"kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq\") pod \"neutron-895b-account-create-update-rbv8b\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.429599 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6m6wj"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.465703 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-895b-account-create-update-rbv8b"
Dec 11 18:17:13 crc kubenswrapper[4877]: I1211 18:17:13.951725 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.432961 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"]
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.436957 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.455615 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"]
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.584056 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hs5\" (UniqueName: \"kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.584257 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.584348 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.687037 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hs5\" (UniqueName: \"kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.687132 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.687168 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.687834 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.687902 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.709151 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hs5\" (UniqueName: \"kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5\") pod \"redhat-marketplace-xg49p\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:14 crc kubenswrapper[4877]: I1211 18:17:14.760118 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg49p"
Dec 11 18:17:16 crc kubenswrapper[4877]: I1211 18:17:16.638126 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 18:17:16 crc kubenswrapper[4877]: I1211 18:17:16.638525 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 18:17:16 crc kubenswrapper[4877]: I1211 18:17:16.638582 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr"
Dec 11 18:17:16 crc kubenswrapper[4877]: I1211 18:17:16.639214 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 18:17:16 crc kubenswrapper[4877]: I1211 18:17:16.639279 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5" gracePeriod=600
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.523471 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5" exitCode=0
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.523518 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5"}
Dec 11 18:17:17 crc kubenswrapper[4877]: W1211 18:17:17.627821 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd76e4cfb_c4b9_464c_be7e_440efa73932e.slice/crio-ca2dec24b3c6b700e2f0233e6654fac0a92c1aeedc61e86ddda64428cc507fa3 WatchSource:0}: Error finding container ca2dec24b3c6b700e2f0233e6654fac0a92c1aeedc61e86ddda64428cc507fa3: Status 404 returned error can't find the container with id ca2dec24b3c6b700e2f0233e6654fac0a92c1aeedc61e86ddda64428cc507fa3
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.741681 4877 scope.go:117] "RemoveContainer" containerID="1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3"
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.859953 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-4ngh6"
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.975269 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976076 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run" (OuterVolumeSpecName: "var-run") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976538 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976612 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976650 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976671 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976784 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sxhp\" (UniqueName: \"kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp\") pod \"91672b3b-2705-49d0-8905-08768f60ab7b\" (UID: \"91672b3b-2705-49d0-8905-08768f60ab7b\") "
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.976956 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.978517 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts" (OuterVolumeSpecName: "scripts") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.978564 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.978871 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.980626 4877 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.980655 4877 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.980669 4877 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.980683 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91672b3b-2705-49d0-8905-08768f60ab7b-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.980693 4877 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91672b3b-2705-49d0-8905-08768f60ab7b-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:17 crc kubenswrapper[4877]: I1211 18:17:17.991580 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp" (OuterVolumeSpecName: "kube-api-access-2sxhp") pod "91672b3b-2705-49d0-8905-08768f60ab7b" (UID: "91672b3b-2705-49d0-8905-08768f60ab7b"). InnerVolumeSpecName "kube-api-access-2sxhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.009181 4877 scope.go:117] "RemoveContainer" containerID="52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68"
Dec 11 18:17:18 crc kubenswrapper[4877]: E1211 18:17:18.010327 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68\": container with ID starting with 52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68 not found: ID does not exist" containerID="52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.010391 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68"} err="failed to get container status \"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68\": rpc error: code = NotFound desc = could not find container \"52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68\": container with ID starting with 52a68b777dfc73ace634a14b4367da1ba72971db5dac33c5a162d17e68584a68 not found: ID does not exist"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.010430 4877 scope.go:117] "RemoveContainer" containerID="1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3"
Dec 11 18:17:18 crc kubenswrapper[4877]: E1211 18:17:18.010800 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3\": container with ID starting with 1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3 not found: ID does not exist" containerID="1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.010840 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3"} err="failed to get container status \"1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3\": rpc error: code = NotFound desc = could not find container \"1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3\": container with ID starting with 1be556a7cab86bd220f13c4a3bb5b9793560f6a9dd4e865164ecd02ed46d16e3 not found: ID does not exist"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.010868 4877 scope.go:117] "RemoveContainer" containerID="a27d4e9d33f15b85376f195565e3ec7c0836b3706bd7f69f9b9e7b666aacdfa3"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.082081 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sxhp\" (UniqueName: \"kubernetes.io/projected/91672b3b-2705-49d0-8905-08768f60ab7b-kube-api-access-2sxhp\") on node \"crc\" DevicePath \"\""
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.201383 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-tmjmw"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.545318 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zdz6c-config-4ngh6" event={"ID":"91672b3b-2705-49d0-8905-08768f60ab7b","Type":"ContainerDied","Data":"eb9b0f6d99d5929ad3557f7fb2cf9566d7cdd497096d7c921390fdcd6e57f974"}
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.545357 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9b0f6d99d5929ad3557f7fb2cf9566d7cdd497096d7c921390fdcd6e57f974"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.545481 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zdz6c-config-4ngh6"
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.559948 4877 generic.go:334] "Generic (PLEG): container finished" podID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerID="04d2788df77d3a881d52bc17092ddd02065eff4de7f2545f2e93444ab46a5604" exitCode=0
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.560219 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerDied","Data":"04d2788df77d3a881d52bc17092ddd02065eff4de7f2545f2e93444ab46a5604"}
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.560298 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerStarted","Data":"ca2dec24b3c6b700e2f0233e6654fac0a92c1aeedc61e86ddda64428cc507fa3"}
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.565206 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.571654 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a"}
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.577555 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tmjmw" event={"ID":"0c6f25ca-ff23-47cf-99f9-eb8355c546ec","Type":"ContainerStarted","Data":"e748d3ef751ca956b3dd78aeaca8b81eadd9778584551744f54c214060018e95"}
Dec 11 18:17:18 crc kubenswrapper[4877]: W1211 18:17:18.687247 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83845faf_f287_4962_afff_966bfac050eb.slice/crio-52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d WatchSource:0}: Error finding container 52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d: Status 404 returned error can't find the container with id 52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d
Dec 11 18:17:18 crc kubenswrapper[4877]: W1211 18:17:18.699665 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea097e48_d917_409a_befa_14d0ba6dc67b.slice/crio-82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680 WatchSource:0}: Error finding container 82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680: Status 404 returned error can't find the container with id 82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.718492 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c23a-account-create-update-mrnds"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.756133 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-895b-account-create-update-rbv8b"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.765527 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6dd3-account-create-update-xwr7s"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.775215 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6m6wj"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.840592 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rlb72"]
Dec 11 18:17:18 crc kubenswrapper[4877]: W1211 18:17:18.844174 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aa10530_60c2_46d2_8a52_7422281745bf.slice/crio-0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05 WatchSource:0}: Error finding container 0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05: Status 404 returned error can't find the container with id 0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.889559 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qj6wx"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.903289 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.961081 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zdz6c-config-4ngh6"]
Dec 11 18:17:18 crc kubenswrapper[4877]: I1211 18:17:18.974569 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zdz6c-config-4ngh6"]
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.228882 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91672b3b-2705-49d0-8905-08768f60ab7b" path="/var/lib/kubelet/pods/91672b3b-2705-49d0-8905-08768f60ab7b/volumes"
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.591583 4877 generic.go:334] "Generic (PLEG): container finished" podID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerID="3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242" exitCode=0
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.591864 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerDied","Data":"3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.591917 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerStarted","Data":"4d2933e4f0da8e69cd070cc0c11479523389695f4b63c613dff99a5384bf33af"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.593821 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m6wj" event={"ID":"ea097e48-d917-409a-befa-14d0ba6dc67b","Type":"ContainerStarted","Data":"82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.598809 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rlb72" event={"ID":"8aa10530-60c2-46d2-8a52-7422281745bf","Type":"ContainerStarted","Data":"81aa9b87184d399502c75da40beaa9e7ad75594780d8593f10ec43b1591f8de8"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.598851 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rlb72" event={"ID":"8aa10530-60c2-46d2-8a52-7422281745bf","Type":"ContainerStarted","Data":"0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.644064 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6dd3-account-create-update-xwr7s" event={"ID":"89c691a5-8b64-4aee-8833-3453f25422ce","Type":"ContainerStarted","Data":"eb2b930d88003b2b206cca782b71ce27c67dae8e0d903bebffc95bbac1e76348"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.644121 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6dd3-account-create-update-xwr7s" event={"ID":"89c691a5-8b64-4aee-8833-3453f25422ce","Type":"ContainerStarted","Data":"4c54936be4b9c7581557ff2998bb097452f2b202fcf4387ca5b575a9127b3e4b"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.648018 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-895b-account-create-update-rbv8b" event={"ID":"053e2796-2bef-48ed-a1c2-47917558ad1a","Type":"ContainerStarted","Data":"390c072bf6e41669bf8fd85475f140d4c93f52c3d40264349af5a87e2239ba9e"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.648104 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-895b-account-create-update-rbv8b" event={"ID":"053e2796-2bef-48ed-a1c2-47917558ad1a","Type":"ContainerStarted","Data":"e4e7f176bd60335ef2c09e825dc5e0bc466ea6ea6e66a7aace18ab91e86bc19b"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.657248 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c23a-account-create-update-mrnds" event={"ID":"83845faf-f287-4962-afff-966bfac050eb","Type":"ContainerStarted","Data":"9cc51c4edb729c10f60ac6e729203762d3b929ce0cf9e75a9290889697e2bc8e"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.657356 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c23a-account-create-update-mrnds" event={"ID":"83845faf-f287-4962-afff-966bfac050eb","Type":"ContainerStarted","Data":"52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.661894 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"9c09f426fb5c2a7ea7e554ea6599d10e15219548ecdc3835329dcf91cc0f8f26"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.664900 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6dd3-account-create-update-xwr7s" podStartSLOduration=7.664881604 podStartE2EDuration="7.664881604s" podCreationTimestamp="2025-12-11 18:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:19.662391028 +0000 UTC m=+1000.688635072" watchObservedRunningTime="2025-12-11 18:17:19.664881604 +0000 UTC m=+1000.691125648"
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.667435 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-rlb72" podStartSLOduration=7.667407342 podStartE2EDuration="7.667407342s" podCreationTimestamp="2025-12-11 18:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:19.63286099 +0000 UTC m=+1000.659105054" watchObservedRunningTime="2025-12-11 18:17:19.667407342 +0000 UTC m=+1000.693651396"
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.683616 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tmjmw" event={"ID":"0c6f25ca-ff23-47cf-99f9-eb8355c546ec","Type":"ContainerStarted","Data":"625d7b120449ed4f96310fee4fe84f0381f969b7acbf967997e7904697b947aa"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.686922 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7hhdv" event={"ID":"9b58d9e9-69e6-42e8-86eb-538ac26c6340","Type":"ContainerStarted","Data":"54275e14fec872dbf7c5e1b7d14a49937f3c11dc32fb3d8a02dd16c24ca9cf3b"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.688283 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-895b-account-create-update-rbv8b" podStartSLOduration=6.688255028 podStartE2EDuration="6.688255028s" podCreationTimestamp="2025-12-11 18:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:19.682229827 +0000 UTC m=+1000.708473871" watchObservedRunningTime="2025-12-11 18:17:19.688255028 +0000 UTC m=+1000.714499072"
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.690205 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qj6wx" event={"ID":"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7","Type":"ContainerStarted","Data":"d8c12f76524d085466f0a341aa8bc01d54554c588f5ddf3b3abc5e2e51a2b1e6"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.690256 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qj6wx" event={"ID":"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7","Type":"ContainerStarted","Data":"267fdf03028796221fa6ae7d5ec902dcef4c86512bb5727633e3561d0ae7d12f"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.697259 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerStarted","Data":"dd9c5c05cda1b5efb1178a2889d49b35fd1281691cd016740b0e213f7c026ef0"}
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.720020 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c23a-account-create-update-mrnds" podStartSLOduration=7.719979155 podStartE2EDuration="7.719979155s" podCreationTimestamp="2025-12-11 18:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:19.697652209 +0000 UTC m=+1000.723896253" watchObservedRunningTime="2025-12-11 18:17:19.719979155 +0000 UTC m=+1000.746223199"
Dec 11 18:17:19 crc kubenswrapper[4877]: I1211 18:17:19.790671 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7hhdv" podStartSLOduration=3.828014425 podStartE2EDuration="19.790641051s" podCreationTimestamp="2025-12-11 18:17:00 +0000 UTC" firstStartedPulling="2025-12-11 18:17:01.846088042 +0000 UTC m=+982.872332086" lastFinishedPulling="2025-12-11 18:17:17.808714668 +0000 UTC m=+998.834958712" observedRunningTime="2025-12-11 18:17:19.778163698 +0000 UTC m=+1000.804407752" watchObservedRunningTime="2025-12-11 18:17:19.790641051 +0000 UTC m=+1000.816885095"
Dec 11 18:17:20
crc kubenswrapper[4877]: I1211 18:17:20.713974 4877 generic.go:334] "Generic (PLEG): container finished" podID="83845faf-f287-4962-afff-966bfac050eb" containerID="9cc51c4edb729c10f60ac6e729203762d3b929ce0cf9e75a9290889697e2bc8e" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.714142 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c23a-account-create-update-mrnds" event={"ID":"83845faf-f287-4962-afff-966bfac050eb","Type":"ContainerDied","Data":"9cc51c4edb729c10f60ac6e729203762d3b929ce0cf9e75a9290889697e2bc8e"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.717435 4877 generic.go:334] "Generic (PLEG): container finished" podID="053e2796-2bef-48ed-a1c2-47917558ad1a" containerID="390c072bf6e41669bf8fd85475f140d4c93f52c3d40264349af5a87e2239ba9e" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.717529 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-895b-account-create-update-rbv8b" event={"ID":"053e2796-2bef-48ed-a1c2-47917558ad1a","Type":"ContainerDied","Data":"390c072bf6e41669bf8fd85475f140d4c93f52c3d40264349af5a87e2239ba9e"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.719847 4877 generic.go:334] "Generic (PLEG): container finished" podID="8aa10530-60c2-46d2-8a52-7422281745bf" containerID="81aa9b87184d399502c75da40beaa9e7ad75594780d8593f10ec43b1591f8de8" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.719896 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rlb72" event={"ID":"8aa10530-60c2-46d2-8a52-7422281745bf","Type":"ContainerDied","Data":"81aa9b87184d399502c75da40beaa9e7ad75594780d8593f10ec43b1591f8de8"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.722447 4877 generic.go:334] "Generic (PLEG): container finished" podID="0c6f25ca-ff23-47cf-99f9-eb8355c546ec" containerID="625d7b120449ed4f96310fee4fe84f0381f969b7acbf967997e7904697b947aa" exitCode=0 Dec 11 18:17:20 crc 
kubenswrapper[4877]: I1211 18:17:20.722534 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tmjmw" event={"ID":"0c6f25ca-ff23-47cf-99f9-eb8355c546ec","Type":"ContainerDied","Data":"625d7b120449ed4f96310fee4fe84f0381f969b7acbf967997e7904697b947aa"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.724187 4877 generic.go:334] "Generic (PLEG): container finished" podID="eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" containerID="d8c12f76524d085466f0a341aa8bc01d54554c588f5ddf3b3abc5e2e51a2b1e6" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.724263 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qj6wx" event={"ID":"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7","Type":"ContainerDied","Data":"d8c12f76524d085466f0a341aa8bc01d54554c588f5ddf3b3abc5e2e51a2b1e6"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.732462 4877 generic.go:334] "Generic (PLEG): container finished" podID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerID="dd9c5c05cda1b5efb1178a2889d49b35fd1281691cd016740b0e213f7c026ef0" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.732587 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerDied","Data":"dd9c5c05cda1b5efb1178a2889d49b35fd1281691cd016740b0e213f7c026ef0"} Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.740008 4877 generic.go:334] "Generic (PLEG): container finished" podID="89c691a5-8b64-4aee-8833-3453f25422ce" containerID="eb2b930d88003b2b206cca782b71ce27c67dae8e0d903bebffc95bbac1e76348" exitCode=0 Dec 11 18:17:20 crc kubenswrapper[4877]: I1211 18:17:20.740637 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6dd3-account-create-update-xwr7s" 
event={"ID":"89c691a5-8b64-4aee-8833-3453f25422ce","Type":"ContainerDied","Data":"eb2b930d88003b2b206cca782b71ce27c67dae8e0d903bebffc95bbac1e76348"} Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.255753 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.263642 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qj6wx" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.373110 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dck8h\" (UniqueName: \"kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h\") pod \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.373182 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb8f7\" (UniqueName: \"kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7\") pod \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.373262 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts\") pod \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\" (UID: \"0c6f25ca-ff23-47cf-99f9-eb8355c546ec\") " Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.373342 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts\") pod \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\" (UID: \"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7\") " Dec 11 18:17:21 crc 
kubenswrapper[4877]: I1211 18:17:21.375157 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" (UID: "eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.375423 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c6f25ca-ff23-47cf-99f9-eb8355c546ec" (UID: "0c6f25ca-ff23-47cf-99f9-eb8355c546ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.381730 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h" (OuterVolumeSpecName: "kube-api-access-dck8h") pod "eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" (UID: "eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7"). InnerVolumeSpecName "kube-api-access-dck8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.381926 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7" (OuterVolumeSpecName: "kube-api-access-bb8f7") pod "0c6f25ca-ff23-47cf-99f9-eb8355c546ec" (UID: "0c6f25ca-ff23-47cf-99f9-eb8355c546ec"). InnerVolumeSpecName "kube-api-access-bb8f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.477202 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dck8h\" (UniqueName: \"kubernetes.io/projected/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-kube-api-access-dck8h\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.477279 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb8f7\" (UniqueName: \"kubernetes.io/projected/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-kube-api-access-bb8f7\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.477300 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c6f25ca-ff23-47cf-99f9-eb8355c546ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.477322 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.757246 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qj6wx" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.757589 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qj6wx" event={"ID":"eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7","Type":"ContainerDied","Data":"267fdf03028796221fa6ae7d5ec902dcef4c86512bb5727633e3561d0ae7d12f"} Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.757715 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267fdf03028796221fa6ae7d5ec902dcef4c86512bb5727633e3561d0ae7d12f" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.759521 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-tmjmw" event={"ID":"0c6f25ca-ff23-47cf-99f9-eb8355c546ec","Type":"ContainerDied","Data":"e748d3ef751ca956b3dd78aeaca8b81eadd9778584551744f54c214060018e95"} Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.759551 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-tmjmw" Dec 11 18:17:21 crc kubenswrapper[4877]: I1211 18:17:21.759567 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e748d3ef751ca956b3dd78aeaca8b81eadd9778584551744f54c214060018e95" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.447217 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-895b-account-create-update-rbv8b" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.461938 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.484585 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.528933 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c23a-account-create-update-mrnds" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.603167 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2b2h\" (UniqueName: \"kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h\") pod \"83845faf-f287-4962-afff-966bfac050eb\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.603264 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2cq\" (UniqueName: \"kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq\") pod \"053e2796-2bef-48ed-a1c2-47917558ad1a\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.603337 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts\") pod \"83845faf-f287-4962-afff-966bfac050eb\" (UID: \"83845faf-f287-4962-afff-966bfac050eb\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.603713 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vjgl\" (UniqueName: \"kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl\") pod \"89c691a5-8b64-4aee-8833-3453f25422ce\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.604049 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts\") pod 
\"89c691a5-8b64-4aee-8833-3453f25422ce\" (UID: \"89c691a5-8b64-4aee-8833-3453f25422ce\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.604189 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hl5d\" (UniqueName: \"kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d\") pod \"8aa10530-60c2-46d2-8a52-7422281745bf\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.604331 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts\") pod \"053e2796-2bef-48ed-a1c2-47917558ad1a\" (UID: \"053e2796-2bef-48ed-a1c2-47917558ad1a\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.604527 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts\") pod \"8aa10530-60c2-46d2-8a52-7422281745bf\" (UID: \"8aa10530-60c2-46d2-8a52-7422281745bf\") " Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.605854 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83845faf-f287-4962-afff-966bfac050eb" (UID: "83845faf-f287-4962-afff-966bfac050eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.606606 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89c691a5-8b64-4aee-8833-3453f25422ce" (UID: "89c691a5-8b64-4aee-8833-3453f25422ce"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.607100 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83845faf-f287-4962-afff-966bfac050eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.607918 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "053e2796-2bef-48ed-a1c2-47917558ad1a" (UID: "053e2796-2bef-48ed-a1c2-47917558ad1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.608039 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8aa10530-60c2-46d2-8a52-7422281745bf" (UID: "8aa10530-60c2-46d2-8a52-7422281745bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.645793 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h" (OuterVolumeSpecName: "kube-api-access-c2b2h") pod "83845faf-f287-4962-afff-966bfac050eb" (UID: "83845faf-f287-4962-afff-966bfac050eb"). InnerVolumeSpecName "kube-api-access-c2b2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.645951 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq" (OuterVolumeSpecName: "kube-api-access-vm2cq") pod "053e2796-2bef-48ed-a1c2-47917558ad1a" (UID: "053e2796-2bef-48ed-a1c2-47917558ad1a"). InnerVolumeSpecName "kube-api-access-vm2cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.646007 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl" (OuterVolumeSpecName: "kube-api-access-9vjgl") pod "89c691a5-8b64-4aee-8833-3453f25422ce" (UID: "89c691a5-8b64-4aee-8833-3453f25422ce"). InnerVolumeSpecName "kube-api-access-9vjgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.646054 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d" (OuterVolumeSpecName: "kube-api-access-2hl5d") pod "8aa10530-60c2-46d2-8a52-7422281745bf" (UID: "8aa10530-60c2-46d2-8a52-7422281745bf"). InnerVolumeSpecName "kube-api-access-2hl5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.711680 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vjgl\" (UniqueName: \"kubernetes.io/projected/89c691a5-8b64-4aee-8833-3453f25422ce-kube-api-access-9vjgl\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.711996 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89c691a5-8b64-4aee-8833-3453f25422ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.712057 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hl5d\" (UniqueName: \"kubernetes.io/projected/8aa10530-60c2-46d2-8a52-7422281745bf-kube-api-access-2hl5d\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.712124 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053e2796-2bef-48ed-a1c2-47917558ad1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.712184 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aa10530-60c2-46d2-8a52-7422281745bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.712244 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2b2h\" (UniqueName: \"kubernetes.io/projected/83845faf-f287-4962-afff-966bfac050eb-kube-api-access-c2b2h\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.712305 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2cq\" (UniqueName: \"kubernetes.io/projected/053e2796-2bef-48ed-a1c2-47917558ad1a-kube-api-access-vm2cq\") on node \"crc\" DevicePath \"\"" 
Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.768240 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-895b-account-create-update-rbv8b" event={"ID":"053e2796-2bef-48ed-a1c2-47917558ad1a","Type":"ContainerDied","Data":"e4e7f176bd60335ef2c09e825dc5e0bc466ea6ea6e66a7aace18ab91e86bc19b"} Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.768289 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4e7f176bd60335ef2c09e825dc5e0bc466ea6ea6e66a7aace18ab91e86bc19b" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.768678 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-895b-account-create-update-rbv8b" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.769741 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6dd3-account-create-update-xwr7s" event={"ID":"89c691a5-8b64-4aee-8833-3453f25422ce","Type":"ContainerDied","Data":"4c54936be4b9c7581557ff2998bb097452f2b202fcf4387ca5b575a9127b3e4b"} Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.769765 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c54936be4b9c7581557ff2998bb097452f2b202fcf4387ca5b575a9127b3e4b" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.769811 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6dd3-account-create-update-xwr7s" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.771540 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rlb72" event={"ID":"8aa10530-60c2-46d2-8a52-7422281745bf","Type":"ContainerDied","Data":"0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05"} Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.771596 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6dde43adde05a4b90938234806f9b87fa1f350bfea332216c64a9470a28d05" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.771563 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rlb72" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.772758 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c23a-account-create-update-mrnds" event={"ID":"83845faf-f287-4962-afff-966bfac050eb","Type":"ContainerDied","Data":"52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d"} Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.772781 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f28974630bf6650fd7efa4113831177951cef6ece03cf28980d1fd0a93dc2d" Dec 11 18:17:22 crc kubenswrapper[4877]: I1211 18:17:22.772791 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c23a-account-create-update-mrnds" Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.816838 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m6wj" event={"ID":"ea097e48-d917-409a-befa-14d0ba6dc67b","Type":"ContainerStarted","Data":"ff25c374ca4c92b1925fc2de135eecfc0271a98afaa2c0fef9039ed0f69b2a97"} Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.829628 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"5e5da185c3e1de10f4a4fb20185df35b478edf973ba72ed8165d405d52d095a8"} Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.829689 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"a4bdcb32b10aad700a9d1a144873c4ac2c2a2c2ea0f5dd29100b848bccfbec58"} Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.845540 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerStarted","Data":"735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60"} Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.850489 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6m6wj" podStartSLOduration=6.225346424 podStartE2EDuration="13.850453437s" podCreationTimestamp="2025-12-11 18:17:13 +0000 UTC" firstStartedPulling="2025-12-11 18:17:18.701535805 +0000 UTC m=+999.727779849" lastFinishedPulling="2025-12-11 18:17:26.326642778 +0000 UTC m=+1007.352886862" observedRunningTime="2025-12-11 18:17:26.837762238 +0000 UTC m=+1007.864006292" watchObservedRunningTime="2025-12-11 18:17:26.850453437 +0000 UTC m=+1007.876697501" Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.851166 
4877 generic.go:334] "Generic (PLEG): container finished" podID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerID="74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2" exitCode=0 Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.851233 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerDied","Data":"74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2"} Dec 11 18:17:26 crc kubenswrapper[4877]: I1211 18:17:26.882180 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qv9nh" podStartSLOduration=12.131555542 podStartE2EDuration="19.882150553s" podCreationTimestamp="2025-12-11 18:17:07 +0000 UTC" firstStartedPulling="2025-12-11 18:17:18.572359658 +0000 UTC m=+999.598603702" lastFinishedPulling="2025-12-11 18:17:26.322954639 +0000 UTC m=+1007.349198713" observedRunningTime="2025-12-11 18:17:26.87343737 +0000 UTC m=+1007.899681424" watchObservedRunningTime="2025-12-11 18:17:26.882150553 +0000 UTC m=+1007.908394607" Dec 11 18:17:27 crc kubenswrapper[4877]: I1211 18:17:27.871804 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"76fe5cd62d71959f65856e35d298523c68eb83379e851821ac88e1ae899ec1ae"} Dec 11 18:17:27 crc kubenswrapper[4877]: I1211 18:17:27.872095 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"26de4816f75855fd69068d928baf5700c093069a69cfa38ac76029f3e105fa45"} Dec 11 18:17:28 crc kubenswrapper[4877]: I1211 18:17:28.151965 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:28 crc kubenswrapper[4877]: I1211 
18:17:28.152415 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:28 crc kubenswrapper[4877]: I1211 18:17:28.885041 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerStarted","Data":"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86"} Dec 11 18:17:28 crc kubenswrapper[4877]: I1211 18:17:28.907528 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xg49p" podStartSLOduration=6.456200952 podStartE2EDuration="14.907509143s" podCreationTimestamp="2025-12-11 18:17:14 +0000 UTC" firstStartedPulling="2025-12-11 18:17:19.593595892 +0000 UTC m=+1000.619839936" lastFinishedPulling="2025-12-11 18:17:28.044904083 +0000 UTC m=+1009.071148127" observedRunningTime="2025-12-11 18:17:28.905864319 +0000 UTC m=+1009.932108383" watchObservedRunningTime="2025-12-11 18:17:28.907509143 +0000 UTC m=+1009.933753187" Dec 11 18:17:29 crc kubenswrapper[4877]: I1211 18:17:29.208509 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qv9nh" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" probeResult="failure" output=< Dec 11 18:17:29 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:17:29 crc kubenswrapper[4877]: > Dec 11 18:17:29 crc kubenswrapper[4877]: I1211 18:17:29.904854 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"665dcde91fa7592851f36079d401322306fe9e36387b820a2d9e12f6c0214344"} Dec 11 18:17:29 crc kubenswrapper[4877]: I1211 18:17:29.904952 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"8dc20bf66fa9e057f2dc98ce4fef0736432ae4808cc6e1e91b0dead567c5c35e"} Dec 11 18:17:29 crc kubenswrapper[4877]: I1211 18:17:29.904971 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"1b9670a33a580df82a468b171b52b97bd90dc2e4e8829fc49b7631f5f7985777"} Dec 11 18:17:29 crc kubenswrapper[4877]: I1211 18:17:29.904984 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"47a1b725dc0854718be5f4ac260c2f10112f78115d3c4dfc43d8c461dd0e8e5f"} Dec 11 18:17:31 crc kubenswrapper[4877]: I1211 18:17:31.929926 4877 generic.go:334] "Generic (PLEG): container finished" podID="ea097e48-d917-409a-befa-14d0ba6dc67b" containerID="ff25c374ca4c92b1925fc2de135eecfc0271a98afaa2c0fef9039ed0f69b2a97" exitCode=0 Dec 11 18:17:31 crc kubenswrapper[4877]: I1211 18:17:31.930043 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m6wj" event={"ID":"ea097e48-d917-409a-befa-14d0ba6dc67b","Type":"ContainerDied","Data":"ff25c374ca4c92b1925fc2de135eecfc0271a98afaa2c0fef9039ed0f69b2a97"} Dec 11 18:17:31 crc kubenswrapper[4877]: I1211 18:17:31.948888 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"e97a9bf3540a73b28c6fd62c68e3b1b99433c2592b012f41f73a3bae93cfff32"} Dec 11 18:17:31 crc kubenswrapper[4877]: I1211 18:17:31.948953 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"6e43a53e35d989a64cbdd6d24926a0516d3b22cf34fba7edab22c929e8c3fa66"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.963353 4877 
generic.go:334] "Generic (PLEG): container finished" podID="9b58d9e9-69e6-42e8-86eb-538ac26c6340" containerID="54275e14fec872dbf7c5e1b7d14a49937f3c11dc32fb3d8a02dd16c24ca9cf3b" exitCode=0 Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.963632 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7hhdv" event={"ID":"9b58d9e9-69e6-42e8-86eb-538ac26c6340","Type":"ContainerDied","Data":"54275e14fec872dbf7c5e1b7d14a49937f3c11dc32fb3d8a02dd16c24ca9cf3b"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.977501 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"e6128ee467515a3a4a4c8c53f6151887ac9fab82bd4ebfc8896e245d0d52eea8"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.977577 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"d6dc21943164763ad2d0cf3be69d4f83659780c72fc0c0221ec338f1f4b285af"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.977588 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"0c08fea1e5f54f34d6134bdff67d1c67b0ba4f0585eb349d3c7f3d8c8fd65a7c"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.977597 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"235930edcc7bb230ad3391eb9f3280bd19f4284814d9383c8b9e2a40a1e3538b"} Dec 11 18:17:32 crc kubenswrapper[4877]: I1211 18:17:32.977606 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c6eb39a0-5f8c-44d1-b27e-c946c850a539","Type":"ContainerStarted","Data":"c15fc0fdb2bbc8eeacbba564c8eca7af73b465fecef249d9de95f0f1e37a5269"} Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.030115 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.174825573 podStartE2EDuration="56.030088993s" podCreationTimestamp="2025-12-11 18:16:37 +0000 UTC" firstStartedPulling="2025-12-11 18:17:18.615805117 +0000 UTC m=+999.642049161" lastFinishedPulling="2025-12-11 18:17:31.471068537 +0000 UTC m=+1012.497312581" observedRunningTime="2025-12-11 18:17:33.024553796 +0000 UTC m=+1014.050797860" watchObservedRunningTime="2025-12-11 18:17:33.030088993 +0000 UTC m=+1014.056333047" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.383560 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6m6wj" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.445466 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.445902 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.445925 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.445939 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa10530-60c2-46d2-8a52-7422281745bf" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.445948 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa10530-60c2-46d2-8a52-7422281745bf" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.445967 
4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83845faf-f287-4962-afff-966bfac050eb" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.445974 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="83845faf-f287-4962-afff-966bfac050eb" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.445983 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c691a5-8b64-4aee-8833-3453f25422ce" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.445989 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c691a5-8b64-4aee-8833-3453f25422ce" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.446000 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6f25ca-ff23-47cf-99f9-eb8355c546ec" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446007 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6f25ca-ff23-47cf-99f9-eb8355c546ec" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.446018 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea097e48-d917-409a-befa-14d0ba6dc67b" containerName="keystone-db-sync" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446024 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea097e48-d917-409a-befa-14d0ba6dc67b" containerName="keystone-db-sync" Dec 11 18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.446033 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91672b3b-2705-49d0-8905-08768f60ab7b" containerName="ovn-config" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446040 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="91672b3b-2705-49d0-8905-08768f60ab7b" containerName="ovn-config" Dec 11 
18:17:33 crc kubenswrapper[4877]: E1211 18:17:33.446058 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053e2796-2bef-48ed-a1c2-47917558ad1a" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446064 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="053e2796-2bef-48ed-a1c2-47917558ad1a" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446261 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446281 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6f25ca-ff23-47cf-99f9-eb8355c546ec" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446288 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea097e48-d917-409a-befa-14d0ba6dc67b" containerName="keystone-db-sync" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446298 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="053e2796-2bef-48ed-a1c2-47917558ad1a" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446306 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="91672b3b-2705-49d0-8905-08768f60ab7b" containerName="ovn-config" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446318 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="83845faf-f287-4962-afff-966bfac050eb" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446328 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c691a5-8b64-4aee-8833-3453f25422ce" containerName="mariadb-account-create-update" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.446339 4877 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8aa10530-60c2-46d2-8a52-7422281745bf" containerName="mariadb-database-create" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.447303 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.450662 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.458751 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.574839 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle\") pod \"ea097e48-d917-409a-befa-14d0ba6dc67b\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575039 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pfp\" (UniqueName: \"kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp\") pod \"ea097e48-d917-409a-befa-14d0ba6dc67b\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575076 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data\") pod \"ea097e48-d917-409a-befa-14d0ba6dc67b\" (UID: \"ea097e48-d917-409a-befa-14d0ba6dc67b\") " Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575388 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb\") pod 
\"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575469 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575523 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575549 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575569 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tr9g\" (UniqueName: \"kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.575598 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.582323 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp" (OuterVolumeSpecName: "kube-api-access-w9pfp") pod "ea097e48-d917-409a-befa-14d0ba6dc67b" (UID: "ea097e48-d917-409a-befa-14d0ba6dc67b"). InnerVolumeSpecName "kube-api-access-w9pfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.615982 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea097e48-d917-409a-befa-14d0ba6dc67b" (UID: "ea097e48-d917-409a-befa-14d0ba6dc67b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.630738 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data" (OuterVolumeSpecName: "config-data") pod "ea097e48-d917-409a-befa-14d0ba6dc67b" (UID: "ea097e48-d917-409a-befa-14d0ba6dc67b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677486 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677614 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677708 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677753 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677789 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tr9g\" (UniqueName: \"kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " 
pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677840 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677938 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pfp\" (UniqueName: \"kubernetes.io/projected/ea097e48-d917-409a-befa-14d0ba6dc67b-kube-api-access-w9pfp\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677961 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.677979 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea097e48-d917-409a-befa-14d0ba6dc67b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.679260 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.679738 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 
crc kubenswrapper[4877]: I1211 18:17:33.679253 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.680419 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.680444 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.704458 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tr9g\" (UniqueName: \"kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g\") pod \"dnsmasq-dns-764c5664d7-9m9np\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:33 crc kubenswrapper[4877]: I1211 18:17:33.776391 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.002477 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6m6wj" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.002729 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6m6wj" event={"ID":"ea097e48-d917-409a-befa-14d0ba6dc67b","Type":"ContainerDied","Data":"82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680"} Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.002769 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82066feaadd4dc39611c2942234f4fa1563252b5038301499358785b3f9ce680" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.213336 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.250867 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.252759 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.267094 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4ck92"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.268411 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.271571 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.271768 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.271917 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dcm7" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.272031 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.276692 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.293764 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.307694 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4ck92"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.394128 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.442574 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.442956 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443003 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443042 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443083 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443106 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443129 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443157 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85dq6\" (UniqueName: \"kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443178 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg29\" (UniqueName: \"kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443205 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443228 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.443258 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.501471 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vtwqc"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.506015 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.513790 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6m4g8" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.514037 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.514289 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544797 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544850 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544907 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544944 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544972 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.544999 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545031 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545050 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys\") pod 
\"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545071 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545096 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85dq6\" (UniqueName: \"kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545115 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tg29\" (UniqueName: \"kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545137 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.545940 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " 
pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.546170 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.546712 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.580215 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.581042 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.581261 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.581299 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.582154 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.583722 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.583793 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.585669 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tg29\" (UniqueName: \"kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.594259 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-c5gzn\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc 
kubenswrapper[4877]: I1211 18:17:34.600475 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.613364 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85dq6\" (UniqueName: \"kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6\") pod \"keystone-bootstrap-4ck92\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.614648 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.616926 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.617918 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.617977 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-smpcj" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.664045 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.664201 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc 
kubenswrapper[4877]: I1211 18:17:34.664360 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9l9\" (UniqueName: \"kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.664598 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.664637 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.664671 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.740260 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vtwqc"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.760355 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.762002 4877 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784827 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784868 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784892 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784940 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9l9\" (UniqueName: \"kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784970 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76pr\" (UniqueName: \"kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: 
\"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.784988 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.785011 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.785040 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.785059 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.785098 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 
18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.785122 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.789322 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.789689 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.790907 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.802141 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.803741 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: 
I1211 18:17:34.807011 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.831609 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9l9\" (UniqueName: \"kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9\") pod \"cinder-db-sync-vtwqc\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.840761 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-x4kfs"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.842515 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.849874 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.850074 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.850202 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-b56vs" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.858529 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x4kfs"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.876877 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.886073 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.887430 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76pr\" (UniqueName: \"kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.887455 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.887485 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.887555 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.887575 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.888440 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.889149 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.889657 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.889943 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.895996 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.903772 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.904271 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bcdfz"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.906191 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.915166 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.918126 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8lj7g" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.918699 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.918829 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.920810 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76pr\" (UniqueName: \"kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr\") pod \"horizon-7ff8bd457f-dgbx4\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.922131 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.922295 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:17:34 crc kubenswrapper[4877]: E1211 18:17:34.926032 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b58d9e9-69e6-42e8-86eb-538ac26c6340" containerName="glance-db-sync" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.926067 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b58d9e9-69e6-42e8-86eb-538ac26c6340" containerName="glance-db-sync" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.926246 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b58d9e9-69e6-42e8-86eb-538ac26c6340" containerName="glance-db-sync" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.927405 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.933984 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.933994 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.936071 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bcdfz"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.950621 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.956934 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.959275 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.967683 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.979006 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wgnnt"] Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.980640 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.983696 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4d64w" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.983985 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994091 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data\") pod \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994249 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data\") pod \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994275 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle\") pod \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994332 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrt4\" (UniqueName: 
\"kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4\") pod \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\" (UID: \"9b58d9e9-69e6-42e8-86eb-538ac26c6340\") " Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994587 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994617 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj6c\" (UniqueName: \"kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994657 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994701 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle\") pod \"neutron-db-sync-x4kfs\" (UID: 
\"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994825 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.994897 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.995001 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvghj\" (UniqueName: \"kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:34 crc kubenswrapper[4877]: I1211 18:17:34.995041 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.001086 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " 
pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.001578 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczd6\" (UniqueName: \"kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.001734 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.002004 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.006705 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.006793 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc 
kubenswrapper[4877]: I1211 18:17:35.002402 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.005801 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4" (OuterVolumeSpecName: "kube-api-access-5vrt4") pod "9b58d9e9-69e6-42e8-86eb-538ac26c6340" (UID: "9b58d9e9-69e6-42e8-86eb-538ac26c6340"). InnerVolumeSpecName "kube-api-access-5vrt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.007798 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b58d9e9-69e6-42e8-86eb-538ac26c6340" (UID: "9b58d9e9-69e6-42e8-86eb-538ac26c6340"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.009965 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wgnnt"] Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.025010 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.029070 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.034727 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.041071 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9" exitCode=1 Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.041151 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9"} Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.055302 4877 scope.go:117] "RemoveContainer" containerID="67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.058421 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7hhdv" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.058352 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7hhdv" event={"ID":"9b58d9e9-69e6-42e8-86eb-538ac26c6340","Type":"ContainerDied","Data":"d5fef6ed2419b6aaaaf41fb542b33b6cdaa206ca0a9e1ca453e6f285c0a3285c"} Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.058589 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5fef6ed2419b6aaaaf41fb542b33b6cdaa206ca0a9e1ca453e6f285c0a3285c" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.065520 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" event={"ID":"79ef7d52-d31a-4031-b327-8952bdfd2313","Type":"ContainerStarted","Data":"bb4bb902c8cc0877d46e3c33a6430e3bacb5df8e4316f1889389f3bd82e75eec"} Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.074247 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b58d9e9-69e6-42e8-86eb-538ac26c6340" (UID: "9b58d9e9-69e6-42e8-86eb-538ac26c6340"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108710 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108760 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108794 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108821 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvghj\" (UniqueName: \"kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108852 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc 
kubenswrapper[4877]: I1211 18:17:35.108882 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108903 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108921 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108938 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.108971 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczd6\" (UniqueName: \"kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109003 
4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109040 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109064 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwvw\" (UniqueName: \"kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109081 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109099 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxbx\" (UniqueName: \"kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109132 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109151 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109173 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zn7k\" (UniqueName: \"kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109204 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109222 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109242 4877 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109259 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109275 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj6c\" (UniqueName: \"kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109297 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109317 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109345 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: 
\"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109386 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109412 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109443 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109502 4877 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109516 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.109528 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrt4\" (UniqueName: \"kubernetes.io/projected/9b58d9e9-69e6-42e8-86eb-538ac26c6340-kube-api-access-5vrt4\") 
on node \"crc\" DevicePath \"\"" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.117167 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.117208 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: E1211 18:17:35.125661 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a860ae_4169_4f47_8ba7_032c96b4be3a.slice/crio-67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a860ae_4169_4f47_8ba7_032c96b4be3a.slice/crio-conmon-67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9.scope\": RecentStats: unable to find data in memory cache]" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.131092 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.142113 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczd6\" (UniqueName: \"kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6\") pod 
\"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.153822 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj6c\" (UniqueName: \"kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.154999 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.159657 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.160267 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.160854 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.161400 4877 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.164317 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.174172 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.175791 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvghj\" (UniqueName: \"kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj\") pod \"neutron-db-sync-x4kfs\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.176077 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.177700 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle\") pod \"placement-db-sync-bcdfz\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.180320 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data\") pod \"ceilometer-0\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.256137 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257276 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwvw\" (UniqueName: \"kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257325 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257349 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbxbx\" (UniqueName: \"kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257585 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257621 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zn7k\" (UniqueName: \"kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257706 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.257801 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.258767 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.258874 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.258913 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.259004 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.259257 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.259288 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.267592 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data" (OuterVolumeSpecName: "config-data") pod 
"9b58d9e9-69e6-42e8-86eb-538ac26c6340" (UID: "9b58d9e9-69e6-42e8-86eb-538ac26c6340"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.294607 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.301439 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.306898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.307965 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.308195 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.308964 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.310599 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b58d9e9-69e6-42e8-86eb-538ac26c6340-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.315168 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.317404 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.327052 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.335337 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.359803 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.340871 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.363843 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwvw\" (UniqueName: \"kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw\") pod \"barbican-db-sync-wgnnt\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.371155 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.335649 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bcdfz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.373214 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zn7k\" (UniqueName: \"kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k\") pod \"dnsmasq-dns-58dd9ff6bc-rq2nr\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.375013 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbxbx\" (UniqueName: \"kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx\") pod \"horizon-6b78ffc5c-cmjhz\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.446651 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"] Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.489607 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.733103 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.747097 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:35 crc kubenswrapper[4877]: I1211 18:17:35.916512 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.082135 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39"} Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.083283 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.089754 4877 generic.go:334] "Generic (PLEG): container finished" podID="79ef7d52-d31a-4031-b327-8952bdfd2313" containerID="aeda5921313350dc603fb123acbdb2482d8f9bc08c3afd10f7947d7adc0b2441" exitCode=0 Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.090016 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" event={"ID":"79ef7d52-d31a-4031-b327-8952bdfd2313","Type":"ContainerDied","Data":"aeda5921313350dc603fb123acbdb2482d8f9bc08c3afd10f7947d7adc0b2441"} Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.095771 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" event={"ID":"e5b67266-82f7-4264-92bf-87e7685bb26a","Type":"ContainerStarted","Data":"5640ce5dee861a6949f5b356b66c998860915b981ef1c1f72dd796c4a4737d34"} Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.126856 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vtwqc"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.193331 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 
18:17:36 crc kubenswrapper[4877]: W1211 18:17:36.241560 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc9dafb_2cd8_4a57_b7f2_941c39748675.slice/crio-b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927 WatchSource:0}: Error finding container b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927: Status 404 returned error can't find the container with id b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927 Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.256298 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4ck92"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.502705 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.505465 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.509872 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.510547 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.510555 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9r6gv" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.510733 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.603336 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.606114 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.611756 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.624009 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.701347 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.701454 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.701591 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.701854 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" 
Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.701966 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhkz\" (UniqueName: \"kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.702093 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.702111 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.706546 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bcdfz"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.720949 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x4kfs"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.751892 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.813748 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.813960 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.814175 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.814680 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.814903 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.815447 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.816062 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.816214 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.816744 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.816975 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhkz\" (UniqueName: \"kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.817057 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzvc\" (UniqueName: \"kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc\") pod \"glance-default-internal-api-0\" (UID: 
\"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.817182 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.817343 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.817477 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.817078 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.815410 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " 
pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.818079 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.822094 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.827273 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.833074 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.835826 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhkz\" (UniqueName: \"kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 
18:17:36.859083 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.868603 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.918140 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.921685 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.924818 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzzvc\" (UniqueName: \"kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.924978 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.925282 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.925345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.925416 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.925588 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.925746 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.931025 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: 
I1211 18:17:36.931255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.931282 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.938095 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.939311 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.949994 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.950907 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzzvc\" (UniqueName: \"kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.963154 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wgnnt"] Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.970697 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:36 crc kubenswrapper[4877]: I1211 18:17:36.978564 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.027255 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.027301 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.027477 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc 
kubenswrapper[4877]: I1211 18:17:37.027521 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tr9g\" (UniqueName: \"kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.027666 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.027698 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb\") pod \"79ef7d52-d31a-4031-b327-8952bdfd2313\" (UID: \"79ef7d52-d31a-4031-b327-8952bdfd2313\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.035716 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g" (OuterVolumeSpecName: "kube-api-access-2tr9g") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "kube-api-access-2tr9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.059076 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.086227 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.092831 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config" (OuterVolumeSpecName: "config") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.102146 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.130185 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.130268 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.130284 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.130296 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.130308 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tr9g\" (UniqueName: \"kubernetes.io/projected/79ef7d52-d31a-4031-b327-8952bdfd2313-kube-api-access-2tr9g\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.149466 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79ef7d52-d31a-4031-b327-8952bdfd2313" (UID: "79ef7d52-d31a-4031-b327-8952bdfd2313"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.155674 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" event={"ID":"44966f79-b411-4c0a-9cc1-fe2576bf06c0","Type":"ContainerStarted","Data":"7dca7e2312d77f8f77d622e89cb8c933b11fc69afadfdf396219302c514bd3cb"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.164674 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ck92" event={"ID":"0e757aad-15bc-4e9b-950a-204c7cf9102c","Type":"ContainerStarted","Data":"9046ad3100a2748ca7171d9ce666df17188450a9dd3d949bb13455728e37ed99"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.164741 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ck92" event={"ID":"0e757aad-15bc-4e9b-950a-204c7cf9102c","Type":"ContainerStarted","Data":"f9cd3940fb7f77779c00aca8a81715186844ea5d1261bddf665f4a1a91db1143"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.173620 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff8bd457f-dgbx4" event={"ID":"a74242f8-b432-44a9-8986-85e7dd6b20e8","Type":"ContainerStarted","Data":"08da815009f40b4171d5c77df039c1cd2bb6b062d95f60261d75918228a0ea0f"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.180700 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wgnnt" event={"ID":"fee9614f-acc1-4883-989e-6348978f4641","Type":"ContainerStarted","Data":"90592653e3751c1db6c94a2a95c4ba34f8b66325e581f193b211d748d52bd964"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.198883 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vtwqc" event={"ID":"2cc9dafb-2cd8-4a57-b7f2-941c39748675","Type":"ContainerStarted","Data":"b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.199944 4877 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4ck92" podStartSLOduration=3.199930595 podStartE2EDuration="3.199930595s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:37.196530184 +0000 UTC m=+1018.222774228" watchObservedRunningTime="2025-12-11 18:17:37.199930595 +0000 UTC m=+1018.226174639" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.209614 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4kfs" event={"ID":"8de39620-4351-442e-afb7-b53270fffe41","Type":"ContainerStarted","Data":"2932460f61b2adc9e522b23130551a011742d2ecaa94d84e6cccdbd01c7bcf75"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.209676 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4kfs" event={"ID":"8de39620-4351-442e-afb7-b53270fffe41","Type":"ContainerStarted","Data":"a10ef3555267d3f3d5cafb4a5f0688ab82b23b360ba20cbb703ccbed6f54531b"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.213733 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerStarted","Data":"beb84b7d97e05160b03fa69f55ba5bc3e075108faef238ad4b2df5e39002d9a2"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.218766 4877 generic.go:334] "Generic (PLEG): container finished" podID="e5b67266-82f7-4264-92bf-87e7685bb26a" containerID="fe8dcb2229c6e95a1e12b63a06b990f6c04ea5eaf00784d93dd9dda70ab8da7c" exitCode=0 Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.226940 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.228124 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xg49p" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="registry-server" containerID="cri-o://22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86" gracePeriod=2 Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.236629 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79ef7d52-d31a-4031-b327-8952bdfd2313-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.245458 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-x4kfs" podStartSLOduration=3.245433799 podStartE2EDuration="3.245433799s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:37.225164738 +0000 UTC m=+1018.251408802" watchObservedRunningTime="2025-12-11 18:17:37.245433799 +0000 UTC m=+1018.271677843" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.269444 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" event={"ID":"e5b67266-82f7-4264-92bf-87e7685bb26a","Type":"ContainerDied","Data":"fe8dcb2229c6e95a1e12b63a06b990f6c04ea5eaf00784d93dd9dda70ab8da7c"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.269524 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bcdfz" event={"ID":"73110039-1660-4b03-9f07-2469ea7fe039","Type":"ContainerStarted","Data":"fa13356af6b8de8a25e5e4865eb097e20a6c8da2a1bc7f7d4f5d4ce40e847389"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.269542 4877 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerStarted","Data":"2955b63b1215856318e96a1fc65c62d958bc39cd035f76721125bd3f9a753b28"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.269552 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9m9np" event={"ID":"79ef7d52-d31a-4031-b327-8952bdfd2313","Type":"ContainerDied","Data":"bb4bb902c8cc0877d46e3c33a6430e3bacb5df8e4316f1889389f3bd82e75eec"} Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.269576 4877 scope.go:117] "RemoveContainer" containerID="aeda5921313350dc603fb123acbdb2482d8f9bc08c3afd10f7947d7adc0b2441" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.278636 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.323146 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.337944 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9m9np"] Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.566741 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:17:37 crc kubenswrapper[4877]: W1211 18:17:37.580305 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfe20951_de12_4570_8cf0_f1c8a14e275d.slice/crio-c2ad2ef72136d7042e395e21db095a207703be222d496192d1f0267dba02e7fe WatchSource:0}: Error finding container c2ad2ef72136d7042e395e21db095a207703be222d496192d1f0267dba02e7fe: Status 404 returned error can't find the container with id c2ad2ef72136d7042e395e21db095a207703be222d496192d1f0267dba02e7fe Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.821864 4877 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.960406 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.964596 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.964813 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.965330 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tg29\" (UniqueName: \"kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.965555 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.965610 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0\") pod \"e5b67266-82f7-4264-92bf-87e7685bb26a\" (UID: \"e5b67266-82f7-4264-92bf-87e7685bb26a\") " Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.972184 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29" (OuterVolumeSpecName: "kube-api-access-4tg29") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "kube-api-access-4tg29". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.992778 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config" (OuterVolumeSpecName: "config") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:37 crc kubenswrapper[4877]: I1211 18:17:37.997256 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.021006 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.022062 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.031312 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.051748 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5b67266-82f7-4264-92bf-87e7685bb26a" (UID: "e5b67266-82f7-4264-92bf-87e7685bb26a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069389 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069440 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069451 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069460 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tg29\" (UniqueName: \"kubernetes.io/projected/e5b67266-82f7-4264-92bf-87e7685bb26a-kube-api-access-4tg29\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069470 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.069479 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5b67266-82f7-4264-92bf-87e7685bb26a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.077080 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.170860 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities\") pod \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.171036 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9hs5\" (UniqueName: \"kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5\") pod \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.171211 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content\") pod \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\" (UID: \"5db9e624-35cd-40fd-b0f8-52b00006e5c6\") " Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.173523 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities" (OuterVolumeSpecName: "utilities") pod "5db9e624-35cd-40fd-b0f8-52b00006e5c6" (UID: "5db9e624-35cd-40fd-b0f8-52b00006e5c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.175759 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5" (OuterVolumeSpecName: "kube-api-access-j9hs5") pod "5db9e624-35cd-40fd-b0f8-52b00006e5c6" (UID: "5db9e624-35cd-40fd-b0f8-52b00006e5c6"). InnerVolumeSpecName "kube-api-access-j9hs5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.195366 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5db9e624-35cd-40fd-b0f8-52b00006e5c6" (UID: "5db9e624-35cd-40fd-b0f8-52b00006e5c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.247482 4877 generic.go:334] "Generic (PLEG): container finished" podID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerID="d73af684d035c1f38fc4100e8c17989afc3e5c317f45a09627354ad6c588c8a2" exitCode=0 Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.246929 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" event={"ID":"44966f79-b411-4c0a-9cc1-fe2576bf06c0","Type":"ContainerDied","Data":"d73af684d035c1f38fc4100e8c17989afc3e5c317f45a09627354ad6c588c8a2"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.258046 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerStarted","Data":"c2ad2ef72136d7042e395e21db095a207703be222d496192d1f0267dba02e7fe"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274118 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" event={"ID":"e5b67266-82f7-4264-92bf-87e7685bb26a","Type":"ContainerDied","Data":"5640ce5dee861a6949f5b356b66c998860915b981ef1c1f72dd796c4a4737d34"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274198 4877 scope.go:117] "RemoveContainer" containerID="fe8dcb2229c6e95a1e12b63a06b990f6c04ea5eaf00784d93dd9dda70ab8da7c" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274325 4877 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274356 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db9e624-35cd-40fd-b0f8-52b00006e5c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274366 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9hs5\" (UniqueName: \"kubernetes.io/projected/5db9e624-35cd-40fd-b0f8-52b00006e5c6-kube-api-access-j9hs5\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.274339 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-c5gzn" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.313744 4877 generic.go:334] "Generic (PLEG): container finished" podID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerID="22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86" exitCode=0 Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.313923 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xg49p" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.314184 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerDied","Data":"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.314238 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xg49p" event={"ID":"5db9e624-35cd-40fd-b0f8-52b00006e5c6","Type":"ContainerDied","Data":"4d2933e4f0da8e69cd070cc0c11479523389695f4b63c613dff99a5384bf33af"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.360588 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerStarted","Data":"97cbd867d6042bb9ab6143b7c08a20ce2cf12c2f96a67faaf436df2e3f8db3ef"} Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.394827 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.402875 4877 scope.go:117] "RemoveContainer" containerID="22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.407837 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-c5gzn"] Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.420047 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"] Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.430256 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xg49p"] Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.510732 4877 scope.go:117] "RemoveContainer" 
containerID="74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.732467 4877 scope.go:117] "RemoveContainer" containerID="3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.814781 4877 scope.go:117] "RemoveContainer" containerID="22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86" Dec 11 18:17:38 crc kubenswrapper[4877]: E1211 18:17:38.816571 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86\": container with ID starting with 22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86 not found: ID does not exist" containerID="22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.816634 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86"} err="failed to get container status \"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86\": rpc error: code = NotFound desc = could not find container \"22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86\": container with ID starting with 22acdadae75d5eb509857d8a23344618c24fa57f09eb03cfe634e1dcad237a86 not found: ID does not exist" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.816672 4877 scope.go:117] "RemoveContainer" containerID="74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2" Dec 11 18:17:38 crc kubenswrapper[4877]: E1211 18:17:38.817542 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2\": container with ID starting with 
74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2 not found: ID does not exist" containerID="74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.817582 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2"} err="failed to get container status \"74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2\": rpc error: code = NotFound desc = could not find container \"74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2\": container with ID starting with 74b6213c1e7b8c1dd67509ada0e7bfe08990ff5220a947bb53c1b26eae3eb9c2 not found: ID does not exist" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.817598 4877 scope.go:117] "RemoveContainer" containerID="3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242" Dec 11 18:17:38 crc kubenswrapper[4877]: E1211 18:17:38.818046 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242\": container with ID starting with 3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242 not found: ID does not exist" containerID="3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242" Dec 11 18:17:38 crc kubenswrapper[4877]: I1211 18:17:38.818111 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242"} err="failed to get container status \"3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242\": rpc error: code = NotFound desc = could not find container \"3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242\": container with ID starting with 3e416fa0c4700d91d71b394f9d66e3ed50a754cab7a806eea8befb7b84e83242 not found: ID does not 
exist" Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.222520 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qv9nh" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" probeResult="failure" output=< Dec 11 18:17:39 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:17:39 crc kubenswrapper[4877]: > Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.239391 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" path="/var/lib/kubelet/pods/5db9e624-35cd-40fd-b0f8-52b00006e5c6/volumes" Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.240328 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ef7d52-d31a-4031-b327-8952bdfd2313" path="/var/lib/kubelet/pods/79ef7d52-d31a-4031-b327-8952bdfd2313/volumes" Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.240872 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b67266-82f7-4264-92bf-87e7685bb26a" path="/var/lib/kubelet/pods/e5b67266-82f7-4264-92bf-87e7685bb26a/volumes" Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.399878 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" event={"ID":"44966f79-b411-4c0a-9cc1-fe2576bf06c0","Type":"ContainerStarted","Data":"1f2e24c59c929a410eb9008ece01b66a9fb7c5310433a2a1582056a5f0e50580"} Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.402264 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:39 crc kubenswrapper[4877]: I1211 18:17:39.430263 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerStarted","Data":"d26a6b33c6fdf7e6a82dc4aa7dee9125c50384a74bbc59e7915228280721bfe3"} Dec 11 18:17:39 
crc kubenswrapper[4877]: I1211 18:17:39.486361 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" podStartSLOduration=5.486326802 podStartE2EDuration="5.486326802s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:39.461658224 +0000 UTC m=+1020.487902278" watchObservedRunningTime="2025-12-11 18:17:39.486326802 +0000 UTC m=+1020.512570846" Dec 11 18:17:40 crc kubenswrapper[4877]: I1211 18:17:40.449269 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerStarted","Data":"6921e39eca0aacb0cecf266043df38b33ad27236c65708211e8368e36d62f77f"} Dec 11 18:17:41 crc kubenswrapper[4877]: I1211 18:17:41.142618 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:17:41 crc kubenswrapper[4877]: I1211 18:17:41.465138 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerStarted","Data":"3dd06d10b774b28213245a9825e2ac4f371d11e2ec18cc1e60b76c6ff4034e6e"} Dec 11 18:17:41 crc kubenswrapper[4877]: I1211 18:17:41.470435 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerStarted","Data":"7d6c8ecb0b458dade431b95c837178aa73f66be5786b31ef00bbfdca183d49b2"} Dec 11 18:17:41 crc kubenswrapper[4877]: I1211 18:17:41.492317 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.492289605 podStartE2EDuration="6.492289605s" podCreationTimestamp="2025-12-11 
18:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:41.483193573 +0000 UTC m=+1022.509437627" watchObservedRunningTime="2025-12-11 18:17:41.492289605 +0000 UTC m=+1022.518533649" Dec 11 18:17:41 crc kubenswrapper[4877]: I1211 18:17:41.520171 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.520145578 podStartE2EDuration="6.520145578s" podCreationTimestamp="2025-12-11 18:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:17:41.512793402 +0000 UTC m=+1022.539037446" watchObservedRunningTime="2025-12-11 18:17:41.520145578 +0000 UTC m=+1022.546389622" Dec 11 18:17:42 crc kubenswrapper[4877]: I1211 18:17:42.488597 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4ck92" event={"ID":"0e757aad-15bc-4e9b-950a-204c7cf9102c","Type":"ContainerDied","Data":"9046ad3100a2748ca7171d9ce666df17188450a9dd3d949bb13455728e37ed99"} Dec 11 18:17:42 crc kubenswrapper[4877]: I1211 18:17:42.488728 4877 generic.go:334] "Generic (PLEG): container finished" podID="0e757aad-15bc-4e9b-950a-204c7cf9102c" containerID="9046ad3100a2748ca7171d9ce666df17188450a9dd3d949bb13455728e37ed99" exitCode=0 Dec 11 18:17:45 crc kubenswrapper[4877]: I1211 18:17:45.750731 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:17:45 crc kubenswrapper[4877]: I1211 18:17:45.873707 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:17:45 crc kubenswrapper[4877]: I1211 18:17:45.874000 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-k8mn8" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" 
containerName="dnsmasq-dns" containerID="cri-o://38e03a991b1e58fcd21737abead67fc68512c2e76c0adbd8d6305f5f91ef11d3" gracePeriod=10 Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.545668 4877 generic.go:334] "Generic (PLEG): container finished" podID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerID="38e03a991b1e58fcd21737abead67fc68512c2e76c0adbd8d6305f5f91ef11d3" exitCode=0 Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.545776 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8mn8" event={"ID":"50d772db-0b17-4c84-b0b9-29deb9a368f2","Type":"ContainerDied","Data":"38e03a991b1e58fcd21737abead67fc68512c2e76c0adbd8d6305f5f91ef11d3"} Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.870211 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.870281 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.906016 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:17:46 crc kubenswrapper[4877]: I1211 18:17:46.916386 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.280802 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.280862 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.318518 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8mn8" 
podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.323024 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.330138 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.556231 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.556303 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.556319 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:47 crc kubenswrapper[4877]: I1211 18:17:47.556334 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:48 crc kubenswrapper[4877]: I1211 18:17:48.215428 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:48 crc kubenswrapper[4877]: I1211 18:17:48.267570 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:17:48 crc kubenswrapper[4877]: I1211 18:17:48.914512 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.088873 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.089008 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.089053 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.089780 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.089865 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.089965 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85dq6\" (UniqueName: 
\"kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6\") pod \"0e757aad-15bc-4e9b-950a-204c7cf9102c\" (UID: \"0e757aad-15bc-4e9b-950a-204c7cf9102c\") " Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.097116 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.097171 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts" (OuterVolumeSpecName: "scripts") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.097973 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.111310 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6" (OuterVolumeSpecName: "kube-api-access-85dq6") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "kube-api-access-85dq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.126619 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data" (OuterVolumeSpecName: "config-data") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.142603 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e757aad-15bc-4e9b-950a-204c7cf9102c" (UID: "0e757aad-15bc-4e9b-950a-204c7cf9102c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.194033 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85dq6\" (UniqueName: \"kubernetes.io/projected/0e757aad-15bc-4e9b-950a-204c7cf9102c-kube-api-access-85dq6\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.194080 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.194095 4877 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.194107 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 
crc kubenswrapper[4877]: I1211 18:17:49.194119 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.194129 4877 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e757aad-15bc-4e9b-950a-204c7cf9102c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.313838 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.315916 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="extract-content" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316067 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="extract-content" Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.316123 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e757aad-15bc-4e9b-950a-204c7cf9102c" containerName="keystone-bootstrap" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316134 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e757aad-15bc-4e9b-950a-204c7cf9102c" containerName="keystone-bootstrap" Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.316172 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b67266-82f7-4264-92bf-87e7685bb26a" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316202 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b67266-82f7-4264-92bf-87e7685bb26a" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.316227 4877 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79ef7d52-d31a-4031-b327-8952bdfd2313" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316237 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ef7d52-d31a-4031-b327-8952bdfd2313" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.316303 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="registry-server" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316313 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="registry-server" Dec 11 18:17:49 crc kubenswrapper[4877]: E1211 18:17:49.316462 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="extract-utilities" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.316477 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="extract-utilities" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.317732 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b67266-82f7-4264-92bf-87e7685bb26a" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.317801 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db9e624-35cd-40fd-b0f8-52b00006e5c6" containerName="registry-server" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.317911 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e757aad-15bc-4e9b-950a-204c7cf9102c" containerName="keystone-bootstrap" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.317941 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ef7d52-d31a-4031-b327-8952bdfd2313" containerName="init" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.329170 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.353212 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.414050 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnt5n\" (UniqueName: \"kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.414158 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.414222 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.515522 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnt5n\" (UniqueName: \"kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.515607 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.515669 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.516065 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.516295 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.544051 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnt5n\" (UniqueName: \"kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n\") pod \"community-operators-cblzw\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.583857 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-4ck92" event={"ID":"0e757aad-15bc-4e9b-950a-204c7cf9102c","Type":"ContainerDied","Data":"f9cd3940fb7f77779c00aca8a81715186844ea5d1261bddf665f4a1a91db1143"} Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.583907 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cd3940fb7f77779c00aca8a81715186844ea5d1261bddf665f4a1a91db1143" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.584013 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4ck92" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.667719 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.718541 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.718708 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.751016 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.811340 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.811996 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.836473 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:17:49 crc kubenswrapper[4877]: I1211 18:17:49.893755 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:17:49 crc 
kubenswrapper[4877]: I1211 18:17:49.894049 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qv9nh" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" containerID="cri-o://735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" gracePeriod=2 Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.131850 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.133186 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.137148 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.137273 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.137365 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4dcm7" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.137583 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.176152 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246265 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246323 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246421 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246475 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246500 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nqf\" (UniqueName: \"kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.246536 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348238 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348336 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348400 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9nqf\" (UniqueName: \"kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348451 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348522 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.348573 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.356787 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.357414 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.357669 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.368477 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.369025 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: 
\"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.369970 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9nqf\" (UniqueName: \"kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf\") pod \"keystone-6d4d95f94c-wnhwk\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.460138 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.601396 4877 generic.go:334] "Generic (PLEG): container finished" podID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerID="735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" exitCode=0 Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.601435 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerDied","Data":"735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60"} Dec 11 18:17:50 crc kubenswrapper[4877]: I1211 18:17:50.988703 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4ck92"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.004747 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4ck92"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.085108 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c7595"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.086780 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.092719 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.101147 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.136318 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c7595"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.175302 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.244926 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e757aad-15bc-4e9b-950a-204c7cf9102c" path="/var/lib/kubelet/pods/0e757aad-15bc-4e9b-950a-204c7cf9102c/volumes" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268521 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268590 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268629 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268662 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268717 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.268749 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q8rr\" (UniqueName: \"kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.284907 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.287852 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.368438 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372619 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372705 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q8rr\" (UniqueName: \"kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372743 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372815 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372883 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys\") pod 
\"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372904 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8km5\" (UniqueName: \"kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372940 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372967 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.372996 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.373026 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data\") pod \"horizon-79c649cbcc-97rb4\" (UID: 
\"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.373066 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.381537 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.449623 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.488089 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8km5\" (UniqueName: \"kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.488194 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.488229 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 
18:17:51.488270 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.488408 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.517041 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.517958 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.519326 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.524141 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.525036 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8km5\" (UniqueName: \"kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.538446 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.539833 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key\") pod \"horizon-79c649cbcc-97rb4\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.539850 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.539969 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data\") pod \"keystone-bootstrap-c7595\" (UID: 
\"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.540438 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.541262 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q8rr\" (UniqueName: \"kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr\") pod \"keystone-bootstrap-c7595\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.614034 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.618057 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" containerID="cri-o://6921e39eca0aacb0cecf266043df38b33ad27236c65708211e8368e36d62f77f" gracePeriod=30 Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.618276 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" containerID="cri-o://d26a6b33c6fdf7e6a82dc4aa7dee9125c50384a74bbc59e7915228280721bfe3" gracePeriod=30 Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.618391 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" 
containerID="cri-o://3dd06d10b774b28213245a9825e2ac4f371d11e2ec18cc1e60b76c6ff4034e6e" gracePeriod=30 Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.618423 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" containerID="cri-o://7d6c8ecb0b458dade431b95c837178aa73f66be5786b31ef00bbfdca183d49b2" gracePeriod=30 Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.632864 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": EOF" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.633058 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": EOF" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.649603 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.142:9292/healthcheck\": EOF" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.656985 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.142:9292/healthcheck\": EOF" Dec 11 18:17:51 crc kubenswrapper[4877]: I1211 18:17:51.716289 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c7595" Dec 11 18:17:52 crc kubenswrapper[4877]: I1211 18:17:52.318874 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8mn8" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 11 18:17:52 crc kubenswrapper[4877]: I1211 18:17:52.629580 4877 generic.go:334] "Generic (PLEG): container finished" podID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerID="6921e39eca0aacb0cecf266043df38b33ad27236c65708211e8368e36d62f77f" exitCode=143 Dec 11 18:17:52 crc kubenswrapper[4877]: I1211 18:17:52.629658 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerDied","Data":"6921e39eca0aacb0cecf266043df38b33ad27236c65708211e8368e36d62f77f"} Dec 11 18:17:52 crc kubenswrapper[4877]: I1211 18:17:52.633814 4877 generic.go:334] "Generic (PLEG): container finished" podID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerID="d26a6b33c6fdf7e6a82dc4aa7dee9125c50384a74bbc59e7915228280721bfe3" exitCode=143 Dec 11 18:17:52 crc kubenswrapper[4877]: I1211 18:17:52.633859 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerDied","Data":"d26a6b33c6fdf7e6a82dc4aa7dee9125c50384a74bbc59e7915228280721bfe3"} Dec 11 18:17:54 crc kubenswrapper[4877]: E1211 18:17:54.760983 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 18:17:54 crc kubenswrapper[4877]: E1211 18:17:54.761747 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67dh5c9h55fh5b7hc8hcfhf6h59fh74h5f8h65bh5f8h5c9h689h54ch696h64ch688h647hb9h66dh7ch597h5c5h587h5f7h8bh96h68fh5fch598hc8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z76pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7ff8bd457f-dgbx4_openstack(a74242f8-b432-44a9-8986-85e7dd6b20e8): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:17:54 crc kubenswrapper[4877]: E1211 18:17:54.764097 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7ff8bd457f-dgbx4" podUID="a74242f8-b432-44a9-8986-85e7dd6b20e8" Dec 11 18:17:55 crc kubenswrapper[4877]: E1211 18:17:55.671749 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7ff8bd457f-dgbx4" podUID="a74242f8-b432-44a9-8986-85e7dd6b20e8" Dec 11 18:17:56 crc kubenswrapper[4877]: I1211 18:17:56.680012 4877 generic.go:334] "Generic (PLEG): container finished" podID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerID="3dd06d10b774b28213245a9825e2ac4f371d11e2ec18cc1e60b76c6ff4034e6e" exitCode=0 Dec 11 18:17:56 crc kubenswrapper[4877]: I1211 18:17:56.680098 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerDied","Data":"3dd06d10b774b28213245a9825e2ac4f371d11e2ec18cc1e60b76c6ff4034e6e"} Dec 11 18:17:56 crc kubenswrapper[4877]: I1211 18:17:56.682900 4877 generic.go:334] "Generic (PLEG): container finished" podID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerID="7d6c8ecb0b458dade431b95c837178aa73f66be5786b31ef00bbfdca183d49b2" 
exitCode=0 Dec 11 18:17:56 crc kubenswrapper[4877]: I1211 18:17:56.682946 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerDied","Data":"7d6c8ecb0b458dade431b95c837178aa73f66be5786b31ef00bbfdca183d49b2"} Dec 11 18:17:58 crc kubenswrapper[4877]: E1211 18:17:58.153103 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60 is running failed: container process not found" containerID="735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 18:17:58 crc kubenswrapper[4877]: E1211 18:17:58.153674 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60 is running failed: container process not found" containerID="735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 18:17:58 crc kubenswrapper[4877]: E1211 18:17:58.154337 4877 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60 is running failed: container process not found" containerID="735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 18:17:58 crc kubenswrapper[4877]: E1211 18:17:58.154406 4877 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60 is running failed: container process not found" 
probeType="Readiness" pod="openshift-marketplace/redhat-operators-qv9nh" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.253553 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.320777 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.323945 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.329363 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.337206 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372096 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctpsl\" (UniqueName: \"kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372162 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372197 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372245 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372282 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372302 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.372327 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.399005 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.439042 4877 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c9dbfd97b-ck4jv"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.453522 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.457076 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c9dbfd97b-ck4jv"] Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.473828 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.473886 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.473910 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.473932 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.474018 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctpsl\" (UniqueName: \"kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.474051 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.474076 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.474904 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.475238 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.476530 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.490400 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.490528 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.514216 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.517955 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctpsl\" (UniqueName: \"kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl\") pod \"horizon-5757846754-b6r7j\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576195 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-combined-ca-bundle\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576280 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-secret-key\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576344 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-config-data\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576427 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2afc51b6-dafc-47ce-875a-3a6249f69b47-logs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576479 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-tls-certs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576522 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-scripts\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.576545 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/2afc51b6-dafc-47ce-875a-3a6249f69b47-kube-api-access-74vq8\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.662597 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.678917 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2afc51b6-dafc-47ce-875a-3a6249f69b47-logs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679003 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-tls-certs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679048 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-scripts\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679071 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/2afc51b6-dafc-47ce-875a-3a6249f69b47-kube-api-access-74vq8\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679142 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-combined-ca-bundle\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679175 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-secret-key\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.679207 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-config-data\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.680482 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-config-data\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.680748 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2afc51b6-dafc-47ce-875a-3a6249f69b47-logs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.682478 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2afc51b6-dafc-47ce-875a-3a6249f69b47-scripts\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.685251 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-tls-certs\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.686770 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-horizon-secret-key\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.691310 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afc51b6-dafc-47ce-875a-3a6249f69b47-combined-ca-bundle\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: \"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.701786 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vq8\" (UniqueName: \"kubernetes.io/projected/2afc51b6-dafc-47ce-875a-3a6249f69b47-kube-api-access-74vq8\") pod \"horizon-c9dbfd97b-ck4jv\" (UID: 
\"2afc51b6-dafc-47ce-875a-3a6249f69b47\") " pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.757506 4877 generic.go:334] "Generic (PLEG): container finished" podID="8de39620-4351-442e-afb7-b53270fffe41" containerID="2932460f61b2adc9e522b23130551a011742d2ecaa94d84e6cccdbd01c7bcf75" exitCode=0 Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.757569 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4kfs" event={"ID":"8de39620-4351-442e-afb7-b53270fffe41","Type":"ContainerDied","Data":"2932460f61b2adc9e522b23130551a011742d2ecaa94d84e6cccdbd01c7bcf75"} Dec 11 18:18:00 crc kubenswrapper[4877]: I1211 18:18:00.927015 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:02 crc kubenswrapper[4877]: I1211 18:18:02.317833 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8mn8" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 11 18:18:02 crc kubenswrapper[4877]: I1211 18:18:02.319201 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:18:05 crc kubenswrapper[4877]: E1211 18:18:05.428702 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 11 18:18:05 crc kubenswrapper[4877]: E1211 18:18:05.429854 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h5dbh65fh54fh656h5b4h65bh77h599h86h669h579hdbh7ch5f5h7dhc5h68bhcch64fh695h98hbdh55bh55fh595h6bhc7h645h5ffh5c8h4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njj6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(49332496-5e7e-426e-9d51-aee9479d8a0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:18:05 crc kubenswrapper[4877]: E1211 18:18:05.962204 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 11 18:18:05 crc kubenswrapper[4877]: E1211 18:18:05.962684 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkwvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wgnnt_openstack(fee9614f-acc1-4883-989e-6348978f4641): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:18:05 crc kubenswrapper[4877]: E1211 18:18:05.964515 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wgnnt" 
podUID="fee9614f-acc1-4883-989e-6348978f4641" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.069462 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.078486 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.086263 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204715 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data\") pod \"a74242f8-b432-44a9-8986-85e7dd6b20e8\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204777 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc\") pod \"50d772db-0b17-4c84-b0b9-29deb9a368f2\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204802 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb\") pod \"50d772db-0b17-4c84-b0b9-29deb9a368f2\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204865 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljxgs\" (UniqueName: \"kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs\") pod \"50d772db-0b17-4c84-b0b9-29deb9a368f2\" (UID: 
\"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204911 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z76pr\" (UniqueName: \"kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr\") pod \"a74242f8-b432-44a9-8986-85e7dd6b20e8\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204972 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvghj\" (UniqueName: \"kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj\") pod \"8de39620-4351-442e-afb7-b53270fffe41\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.204990 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb\") pod \"50d772db-0b17-4c84-b0b9-29deb9a368f2\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205031 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts\") pod \"a74242f8-b432-44a9-8986-85e7dd6b20e8\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205061 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config\") pod \"50d772db-0b17-4c84-b0b9-29deb9a368f2\" (UID: \"50d772db-0b17-4c84-b0b9-29deb9a368f2\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205095 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key\") pod \"a74242f8-b432-44a9-8986-85e7dd6b20e8\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205113 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle\") pod \"8de39620-4351-442e-afb7-b53270fffe41\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205165 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs\") pod \"a74242f8-b432-44a9-8986-85e7dd6b20e8\" (UID: \"a74242f8-b432-44a9-8986-85e7dd6b20e8\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.205188 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config\") pod \"8de39620-4351-442e-afb7-b53270fffe41\" (UID: \"8de39620-4351-442e-afb7-b53270fffe41\") " Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.206334 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts" (OuterVolumeSpecName: "scripts") pod "a74242f8-b432-44a9-8986-85e7dd6b20e8" (UID: "a74242f8-b432-44a9-8986-85e7dd6b20e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.206348 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data" (OuterVolumeSpecName: "config-data") pod "a74242f8-b432-44a9-8986-85e7dd6b20e8" (UID: "a74242f8-b432-44a9-8986-85e7dd6b20e8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.206876 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs" (OuterVolumeSpecName: "logs") pod "a74242f8-b432-44a9-8986-85e7dd6b20e8" (UID: "a74242f8-b432-44a9-8986-85e7dd6b20e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.225218 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs" (OuterVolumeSpecName: "kube-api-access-ljxgs") pod "50d772db-0b17-4c84-b0b9-29deb9a368f2" (UID: "50d772db-0b17-4c84-b0b9-29deb9a368f2"). InnerVolumeSpecName "kube-api-access-ljxgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.225506 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr" (OuterVolumeSpecName: "kube-api-access-z76pr") pod "a74242f8-b432-44a9-8986-85e7dd6b20e8" (UID: "a74242f8-b432-44a9-8986-85e7dd6b20e8"). InnerVolumeSpecName "kube-api-access-z76pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.225552 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a74242f8-b432-44a9-8986-85e7dd6b20e8" (UID: "a74242f8-b432-44a9-8986-85e7dd6b20e8"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.245240 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj" (OuterVolumeSpecName: "kube-api-access-mvghj") pod "8de39620-4351-442e-afb7-b53270fffe41" (UID: "8de39620-4351-442e-afb7-b53270fffe41"). InnerVolumeSpecName "kube-api-access-mvghj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.251648 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de39620-4351-442e-afb7-b53270fffe41" (UID: "8de39620-4351-442e-afb7-b53270fffe41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.253114 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config" (OuterVolumeSpecName: "config") pod "8de39620-4351-442e-afb7-b53270fffe41" (UID: "8de39620-4351-442e-afb7-b53270fffe41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.270188 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config" (OuterVolumeSpecName: "config") pod "50d772db-0b17-4c84-b0b9-29deb9a368f2" (UID: "50d772db-0b17-4c84-b0b9-29deb9a368f2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.291057 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50d772db-0b17-4c84-b0b9-29deb9a368f2" (UID: "50d772db-0b17-4c84-b0b9-29deb9a368f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.300066 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50d772db-0b17-4c84-b0b9-29deb9a368f2" (UID: "50d772db-0b17-4c84-b0b9-29deb9a368f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.300186 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50d772db-0b17-4c84-b0b9-29deb9a368f2" (UID: "50d772db-0b17-4c84-b0b9-29deb9a368f2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308012 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvghj\" (UniqueName: \"kubernetes.io/projected/8de39620-4351-442e-afb7-b53270fffe41-kube-api-access-mvghj\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308035 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308046 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308056 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308064 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308073 4877 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a74242f8-b432-44a9-8986-85e7dd6b20e8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308081 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a74242f8-b432-44a9-8986-85e7dd6b20e8-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308090 4877 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8de39620-4351-442e-afb7-b53270fffe41-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308098 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a74242f8-b432-44a9-8986-85e7dd6b20e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308107 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308116 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50d772db-0b17-4c84-b0b9-29deb9a368f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308124 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljxgs\" (UniqueName: \"kubernetes.io/projected/50d772db-0b17-4c84-b0b9-29deb9a368f2-kube-api-access-ljxgs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.308132 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z76pr\" (UniqueName: \"kubernetes.io/projected/a74242f8-b432-44a9-8986-85e7dd6b20e8-kube-api-access-z76pr\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.829331 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff8bd457f-dgbx4" event={"ID":"a74242f8-b432-44a9-8986-85e7dd6b20e8","Type":"ContainerDied","Data":"08da815009f40b4171d5c77df039c1cd2bb6b062d95f60261d75918228a0ea0f"} Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.829884 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff8bd457f-dgbx4" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.837815 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8mn8" event={"ID":"50d772db-0b17-4c84-b0b9-29deb9a368f2","Type":"ContainerDied","Data":"df5e329152bd911572b61ef54fd501943cd359332cc52c97d7b23c65e60e625e"} Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.837868 4877 scope.go:117] "RemoveContainer" containerID="38e03a991b1e58fcd21737abead67fc68512c2e76c0adbd8d6305f5f91ef11d3" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.838042 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8mn8" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.840534 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x4kfs" event={"ID":"8de39620-4351-442e-afb7-b53270fffe41","Type":"ContainerDied","Data":"a10ef3555267d3f3d5cafb4a5f0688ab82b23b360ba20cbb703ccbed6f54531b"} Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.840566 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10ef3555267d3f3d5cafb4a5f0688ab82b23b360ba20cbb703ccbed6f54531b" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.840617 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-x4kfs" Dec 11 18:18:06 crc kubenswrapper[4877]: E1211 18:18:06.842606 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wgnnt" podUID="fee9614f-acc1-4883-989e-6348978f4641" Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.917740 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.926312 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7ff8bd457f-dgbx4"] Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.936346 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:18:06 crc kubenswrapper[4877]: I1211 18:18:06.947637 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8mn8"] Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.247762 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" path="/var/lib/kubelet/pods/50d772db-0b17-4c84-b0b9-29deb9a368f2/volumes" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.249355 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74242f8-b432-44a9-8986-85e7dd6b20e8" path="/var/lib/kubelet/pods/a74242f8-b432-44a9-8986-85e7dd6b20e8/volumes" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.318969 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8mn8" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.353184 4877 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.353490 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z9l9,ReadOnly:true,MountPath:/var/run
/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vtwqc_openstack(2cc9dafb-2cd8-4a57-b7f2-941c39748675): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.354722 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vtwqc" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.470840 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.472256 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="init" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.472272 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="init" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.472313 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" Dec 11 18:18:07 crc 
kubenswrapper[4877]: I1211 18:18:07.472321 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.472337 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de39620-4351-442e-afb7-b53270fffe41" containerName="neutron-db-sync" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.472343 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de39620-4351-442e-afb7-b53270fffe41" containerName="neutron-db-sync" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.484017 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de39620-4351-442e-afb7-b53270fffe41" containerName="neutron-db-sync" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.484105 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d772db-0b17-4c84-b0b9-29deb9a368f2" containerName="dnsmasq-dns" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.493176 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.498865 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-b56vs" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.499187 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.500153 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.500339 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.504589 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.569333 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.569444 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5slmn\" (UniqueName: \"kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.577794 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config\") pod 
\"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.578389 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.578446 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.591127 4877 scope.go:117] "RemoveContainer" containerID="3944239f64e39309944c139005f39764b5eabeb914d0d144dd8bb389487522cc" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.683637 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.683713 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.683756 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.683823 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5slmn\" (UniqueName: \"kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.683881 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.699579 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.715703 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.716750 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config\") pod 
\"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.717174 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.719035 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5slmn\" (UniqueName: \"kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn\") pod \"neutron-66dd954c9d-qfqj2\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.806069 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.817209 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889102 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889168 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities\") pod \"d76e4cfb-c4b9-464c-be7e-440efa73932e\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889196 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content\") pod \"d76e4cfb-c4b9-464c-be7e-440efa73932e\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889305 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889333 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889427 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889471 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889589 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889629 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhkz\" (UniqueName: \"kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz\") pod \"dfe20951-de12-4570-8cf0-f1c8a14e275d\" (UID: \"dfe20951-de12-4570-8cf0-f1c8a14e275d\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889656 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwl8c\" (UniqueName: \"kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c\") pod \"d76e4cfb-c4b9-464c-be7e-440efa73932e\" (UID: \"d76e4cfb-c4b9-464c-be7e-440efa73932e\") " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.889857 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.890083 4877 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.891635 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs" (OuterVolumeSpecName: "logs") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.895463 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts" (OuterVolumeSpecName: "scripts") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.897235 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c" (OuterVolumeSpecName: "kube-api-access-zwl8c") pod "d76e4cfb-c4b9-464c-be7e-440efa73932e" (UID: "d76e4cfb-c4b9-464c-be7e-440efa73932e"). InnerVolumeSpecName "kube-api-access-zwl8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.899241 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities" (OuterVolumeSpecName: "utilities") pod "d76e4cfb-c4b9-464c-be7e-440efa73932e" (UID: "d76e4cfb-c4b9-464c-be7e-440efa73932e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.900744 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de1228cc-3f6d-49a9-b97a-7a0e79668c1a","Type":"ContainerDied","Data":"97cbd867d6042bb9ab6143b7c08a20ce2cf12c2f96a67faaf436df2e3f8db3ef"} Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.901644 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97cbd867d6042bb9ab6143b7c08a20ce2cf12c2f96a67faaf436df2e3f8db3ef" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.902897 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.906605 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfe20951-de12-4570-8cf0-f1c8a14e275d","Type":"ContainerDied","Data":"c2ad2ef72136d7042e395e21db095a207703be222d496192d1f0267dba02e7fe"} Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.906733 4877 scope.go:117] "RemoveContainer" containerID="7d6c8ecb0b458dade431b95c837178aa73f66be5786b31ef00bbfdca183d49b2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.906926 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.912997 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz" (OuterVolumeSpecName: "kube-api-access-pqhkz") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "kube-api-access-pqhkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.922662 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qv9nh" event={"ID":"d76e4cfb-c4b9-464c-be7e-440efa73932e","Type":"ContainerDied","Data":"ca2dec24b3c6b700e2f0233e6654fac0a92c1aeedc61e86ddda64428cc507fa3"} Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.922850 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qv9nh" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.938091 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: E1211 18:18:07.939768 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vtwqc" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.966436 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data" (OuterVolumeSpecName: "config-data") pod "dfe20951-de12-4570-8cf0-f1c8a14e275d" (UID: "dfe20951-de12-4570-8cf0-f1c8a14e275d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.989553 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993015 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993049 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993089 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993100 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993110 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe20951-de12-4570-8cf0-f1c8a14e275d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993134 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfe20951-de12-4570-8cf0-f1c8a14e275d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993142 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhkz\" (UniqueName: \"kubernetes.io/projected/dfe20951-de12-4570-8cf0-f1c8a14e275d-kube-api-access-pqhkz\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:07 crc kubenswrapper[4877]: I1211 18:18:07.993152 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwl8c\" (UniqueName: \"kubernetes.io/projected/d76e4cfb-c4b9-464c-be7e-440efa73932e-kube-api-access-zwl8c\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.023288 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.029934 4877 scope.go:117] "RemoveContainer" containerID="d26a6b33c6fdf7e6a82dc4aa7dee9125c50384a74bbc59e7915228280721bfe3" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.063964 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.092240 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d76e4cfb-c4b9-464c-be7e-440efa73932e" (UID: "d76e4cfb-c4b9-464c-be7e-440efa73932e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.096449 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d76e4cfb-c4b9-464c-be7e-440efa73932e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.096491 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.112399 4877 scope.go:117] "RemoveContainer" containerID="735224bfbcc29fe38e9a9c8e7a99bbd8a893a7140bc0fe297ce45b5d0898cd60" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.197797 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.197892 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.197939 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.198199 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzzvc\" (UniqueName: \"kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.198262 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.198394 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.198458 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle\") pod \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\" (UID: \"de1228cc-3f6d-49a9-b97a-7a0e79668c1a\") " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.199744 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.204675 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs" (OuterVolumeSpecName: "logs") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.214005 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts" (OuterVolumeSpecName: "scripts") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.214059 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.214111 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc" (OuterVolumeSpecName: "kube-api-access-qzzvc") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "kube-api-access-qzzvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.249584 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303150 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzzvc\" (UniqueName: \"kubernetes.io/projected/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-kube-api-access-qzzvc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303185 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303198 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303233 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303242 4877 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.303255 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.307235 4877 scope.go:117] "RemoveContainer" containerID="dd9c5c05cda1b5efb1178a2889d49b35fd1281691cd016740b0e213f7c026ef0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.307349 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data" (OuterVolumeSpecName: "config-data") pod "de1228cc-3f6d-49a9-b97a-7a0e79668c1a" (UID: "de1228cc-3f6d-49a9-b97a-7a0e79668c1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.335616 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.355328 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.366894 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.401482 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.406239 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1228cc-3f6d-49a9-b97a-7a0e79668c1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.406283 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 
18:18:08.434232 4877 scope.go:117] "RemoveContainer" containerID="04d2788df77d3a881d52bc17092ddd02065eff4de7f2545f2e93444ab46a5604" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.469929 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qv9nh"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.482453 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.482992 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483010 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483036 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483043 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483065 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="extract-utilities" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483075 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="extract-utilities" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483087 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="extract-content" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483094 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="extract-content" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483114 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483121 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483132 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483141 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: E1211 18:18:08.483153 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483159 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483359 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483392 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483408 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" containerName="registry-server" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483416 4877 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.483430 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.484636 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.488340 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.492015 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.494343 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.611539 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.612853 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.612908 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.613069 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.613283 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.613361 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.616366 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvgj\" (UniqueName: \"kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.616513 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.619596 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.627025 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.666528 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720801 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720851 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720889 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720922 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xmvgj\" (UniqueName: \"kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720945 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.720989 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721015 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqcn\" (UniqueName: \"kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721042 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721091 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721125 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721148 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721186 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721215 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721236 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.721912 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.722146 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.725340 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.738120 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.744256 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.744388 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.750056 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.771357 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvgj\" (UniqueName: \"kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825409 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825458 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqcn\" (UniqueName: 
\"kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825546 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825600 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825633 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.825652 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.827022 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.828093 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.833662 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.835723 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.855099 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " pod="openstack/glance-default-external-api-0" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.863093 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqcn\" (UniqueName: \"kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: 
\"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:08 crc kubenswrapper[4877]: I1211 18:18:08.863878 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-882nl\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.038423 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bcdfz" event={"ID":"73110039-1660-4b03-9f07-2469ea7fe039","Type":"ContainerStarted","Data":"8070fd8f8c546372b9b65836b0182e287538b17f94b2a4c59a036fbf5bad787f"} Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.064022 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerStarted","Data":"983cc5afc116fbe82d19b2e4bc15438ca9aac2cdbd0769e5d1319a3b34437c8f"} Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.070092 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.073641 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.075941 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.194720 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bcdfz" podStartSLOduration=5.98195346 podStartE2EDuration="35.194678168s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="2025-12-11 18:17:36.737223416 +0000 UTC m=+1017.763467460" lastFinishedPulling="2025-12-11 18:18:05.949948124 +0000 UTC m=+1046.976192168" observedRunningTime="2025-12-11 18:18:09.087221268 +0000 UTC m=+1050.113465312" watchObservedRunningTime="2025-12-11 18:18:09.194678168 +0000 UTC m=+1050.220922212" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.196938 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.208857 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.215514 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.246812 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76e4cfb-c4b9-464c-be7e-440efa73932e" path="/var/lib/kubelet/pods/d76e4cfb-c4b9-464c-be7e-440efa73932e/volumes" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.247779 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" path="/var/lib/kubelet/pods/de1228cc-3f6d-49a9-b97a-7a0e79668c1a/volumes" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.249690 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" path="/var/lib/kubelet/pods/dfe20951-de12-4570-8cf0-f1c8a14e275d/volumes" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 
18:18:09.252641 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.254556 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.256065 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.260911 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.261322 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.279044 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.292223 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c7595"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.301776 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c9dbfd97b-ck4jv"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.312609 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.373593 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387408 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387501 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387537 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387600 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387681 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387738 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387754 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.387796 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.434300 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.489723 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.489784 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.489847 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.489868 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.489932 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490006 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490043 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490077 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: 
\"kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490646 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490669 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.490945 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.495432 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.495627 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.496233 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.499013 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.512511 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.534166 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:18:09 crc kubenswrapper[4877]: I1211 18:18:09.583178 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:10 crc kubenswrapper[4877]: I1211 18:18:10.072659 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b78ffc5c-cmjhz" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon-log" containerID="cri-o://983cc5afc116fbe82d19b2e4bc15438ca9aac2cdbd0769e5d1319a3b34437c8f" gracePeriod=30 Dec 11 18:18:10 crc kubenswrapper[4877]: I1211 18:18:10.073244 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerStarted","Data":"d13cee107d6d9af0059b0e8acbbf4a909d66620c1f749c2efe338829041bd046"} Dec 11 18:18:10 crc kubenswrapper[4877]: I1211 18:18:10.073306 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b78ffc5c-cmjhz" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon" containerID="cri-o://d13cee107d6d9af0059b0e8acbbf4a909d66620c1f749c2efe338829041bd046" gracePeriod=30 Dec 11 18:18:10 crc kubenswrapper[4877]: I1211 18:18:10.107237 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b78ffc5c-cmjhz" podStartSLOduration=7.0839391 podStartE2EDuration="36.107208075s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="2025-12-11 18:17:36.96480357 +0000 UTC m=+1017.991047614" lastFinishedPulling="2025-12-11 18:18:05.988072545 +0000 UTC m=+1047.014316589" observedRunningTime="2025-12-11 18:18:10.101055402 +0000 UTC m=+1051.127299446" watchObservedRunningTime="2025-12-11 18:18:10.107208075 +0000 UTC m=+1051.133452119" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.332458 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b4dd6dd9-6mv2v"] Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.334850 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.338516 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.339826 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.340786 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4dd6dd9-6mv2v"] Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430019 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-ovndb-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430090 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-public-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430137 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430239 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4q2\" (UniqueName: 
\"kubernetes.io/projected/4808e7d5-7e53-4b59-a46c-86838df224c0-kube-api-access-xj4q2\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430728 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-combined-ca-bundle\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430917 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-httpd-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.430989 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-internal-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533417 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-public-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533491 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533521 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4q2\" (UniqueName: \"kubernetes.io/projected/4808e7d5-7e53-4b59-a46c-86838df224c0-kube-api-access-xj4q2\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533629 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-combined-ca-bundle\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533687 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-httpd-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.533759 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-internal-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.534087 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-ovndb-tls-certs\") pod 
\"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.545100 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-ovndb-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.546662 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-public-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.547239 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-combined-ca-bundle\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.547513 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-internal-tls-certs\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.547582 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc 
kubenswrapper[4877]: I1211 18:18:11.549883 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4808e7d5-7e53-4b59-a46c-86838df224c0-httpd-config\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.556521 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4q2\" (UniqueName: \"kubernetes.io/projected/4808e7d5-7e53-4b59-a46c-86838df224c0-kube-api-access-xj4q2\") pod \"neutron-7b4dd6dd9-6mv2v\" (UID: \"4808e7d5-7e53-4b59-a46c-86838df224c0\") " pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:11 crc kubenswrapper[4877]: I1211 18:18:11.656364 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:12 crc kubenswrapper[4877]: W1211 18:18:12.875997 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12291457_a2ba_4bfa_8c21_fdf315e8dc12.slice/crio-b7230d34d29cf4f72ee46b1eb9f4ddcfee64dad7bcd6a03b61b003985f2f678c WatchSource:0}: Error finding container b7230d34d29cf4f72ee46b1eb9f4ddcfee64dad7bcd6a03b61b003985f2f678c: Status 404 returned error can't find the container with id b7230d34d29cf4f72ee46b1eb9f4ddcfee64dad7bcd6a03b61b003985f2f678c Dec 11 18:18:12 crc kubenswrapper[4877]: W1211 18:18:12.881875 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd642ce6b_7f43_402d_9658_c824289a232c.slice/crio-b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564 WatchSource:0}: Error finding container b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564: Status 404 returned error can't find the container with id b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564 Dec 11 
18:18:12 crc kubenswrapper[4877]: W1211 18:18:12.913516 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e767786_f0b1_4dae_b7c5_fd1e00046935.slice/crio-a1772923300b0253d85948074064f66593bae2fb2e333c2158ad6228088def69 WatchSource:0}: Error finding container a1772923300b0253d85948074064f66593bae2fb2e333c2158ad6228088def69: Status 404 returned error can't find the container with id a1772923300b0253d85948074064f66593bae2fb2e333c2158ad6228088def69 Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.109411 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c7595" event={"ID":"d642ce6b-7f43-402d-9658-c824289a232c","Type":"ContainerStarted","Data":"b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.115051 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9dbfd97b-ck4jv" event={"ID":"2afc51b6-dafc-47ce-875a-3a6249f69b47","Type":"ContainerStarted","Data":"1c1f78deed3b8595689b94ee324a6fb1fdd3f91e87d06e662192fb5e171adc6e"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.116945 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerStarted","Data":"997e954834e59c67ef8ea77682378e963da89d2a4e6156efa4811cdec48550da"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.118439 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerStarted","Data":"de000a49d03bffca2ee444ec205bd64db2011bdcf2f1f1d24f2d5ddb9b130a99"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.127346 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d4d95f94c-wnhwk" 
event={"ID":"0767ff35-4ddd-4785-8538-3d65f777518d","Type":"ContainerStarted","Data":"b5d4c2de6d9dbe752932c217c4effd881a5033371ca8ccf08b93998e0e60739b"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.130211 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerStarted","Data":"b7230d34d29cf4f72ee46b1eb9f4ddcfee64dad7bcd6a03b61b003985f2f678c"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.133344 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerStarted","Data":"a1772923300b0253d85948074064f66593bae2fb2e333c2158ad6228088def69"} Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.479449 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:13 crc kubenswrapper[4877]: W1211 18:18:13.483313 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46cd811_f91f_48b8_aa23_f916227e65d8.slice/crio-4ff50c4fbcdd5f85a37974dd5d8eefc1015f88d2ea1403c13a9fe5385ec0b50b WatchSource:0}: Error finding container 4ff50c4fbcdd5f85a37974dd5d8eefc1015f88d2ea1403c13a9fe5385ec0b50b: Status 404 returned error can't find the container with id 4ff50c4fbcdd5f85a37974dd5d8eefc1015f88d2ea1403c13a9fe5385ec0b50b Dec 11 18:18:13 crc kubenswrapper[4877]: W1211 18:18:13.721219 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4808e7d5_7e53_4b59_a46c_86838df224c0.slice/crio-a6de3d8d1555979d3b98648692c0df054146c4a5ee07b395e707962acb584e1e WatchSource:0}: Error finding container a6de3d8d1555979d3b98648692c0df054146c4a5ee07b395e707962acb584e1e: Status 404 returned error can't find the container with id 
a6de3d8d1555979d3b98648692c0df054146c4a5ee07b395e707962acb584e1e Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.721677 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4dd6dd9-6mv2v"] Dec 11 18:18:13 crc kubenswrapper[4877]: I1211 18:18:13.825392 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:18:13 crc kubenswrapper[4877]: W1211 18:18:13.836261 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod988ac866_4d7f_4417_9461_57187fe0ffb6.slice/crio-eb122de0e8f917e5a4e2925f94a42f19d671d5ce51e4d60e55bff389a3858462 WatchSource:0}: Error finding container eb122de0e8f917e5a4e2925f94a42f19d671d5ce51e4d60e55bff389a3858462: Status 404 returned error can't find the container with id eb122de0e8f917e5a4e2925f94a42f19d671d5ce51e4d60e55bff389a3858462 Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.001739 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:18:14 crc kubenswrapper[4877]: W1211 18:18:14.033116 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38a8876_fdda_4682_938f_bb74481adf46.slice/crio-f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5 WatchSource:0}: Error finding container f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5: Status 404 returned error can't find the container with id f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5 Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.148199 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4dd6dd9-6mv2v" event={"ID":"4808e7d5-7e53-4b59-a46c-86838df224c0","Type":"ContainerStarted","Data":"a6de3d8d1555979d3b98648692c0df054146c4a5ee07b395e707962acb584e1e"} Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 
18:18:14.151528 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerStarted","Data":"f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5"} Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.154860 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerStarted","Data":"eb122de0e8f917e5a4e2925f94a42f19d671d5ce51e4d60e55bff389a3858462"} Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.165303 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-882nl" event={"ID":"c46cd811-f91f-48b8-aa23-f916227e65d8","Type":"ContainerStarted","Data":"4ff50c4fbcdd5f85a37974dd5d8eefc1015f88d2ea1403c13a9fe5385ec0b50b"} Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.172293 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerStarted","Data":"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e"} Dec 11 18:18:14 crc kubenswrapper[4877]: I1211 18:18:14.204705 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerStarted","Data":"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.244454 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerStarted","Data":"75a63f7fbbca5ebe1e16d9116c17598de01ed362da13bf36f9be0d50674c126d"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.254578 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerStarted","Data":"bd82ae6209ba4077a48213d4f4acb433cbf0d036a0c238bd62c8544955f70c8c"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.258260 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerStarted","Data":"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.258293 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerStarted","Data":"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.258465 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79c649cbcc-97rb4" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon-log" containerID="cri-o://a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" gracePeriod=30 Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.260448 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79c649cbcc-97rb4" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon" containerID="cri-o://cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" gracePeriod=30 Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.323147 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9dbfd97b-ck4jv" event={"ID":"2afc51b6-dafc-47ce-875a-3a6249f69b47","Type":"ContainerStarted","Data":"3db9b0e0238596ddb37c44f14fe35685299674185ddb805bccb7b003361c4906"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.323209 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9dbfd97b-ck4jv" 
event={"ID":"2afc51b6-dafc-47ce-875a-3a6249f69b47","Type":"ContainerStarted","Data":"df5bca03cac6c0c31a664c981975f6eeb96e0edc293385f09deadc249d453fef"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.341537 4877 generic.go:334] "Generic (PLEG): container finished" podID="48526cd8-976b-46b0-a73c-eb463d914400" containerID="38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e" exitCode=0 Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.341629 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerDied","Data":"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.355290 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79c649cbcc-97rb4" podStartSLOduration=24.355265892 podStartE2EDuration="24.355265892s" podCreationTimestamp="2025-12-11 18:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.311991894 +0000 UTC m=+1056.338235938" watchObservedRunningTime="2025-12-11 18:18:15.355265892 +0000 UTC m=+1056.381509936" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.361561 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d4d95f94c-wnhwk" event={"ID":"0767ff35-4ddd-4785-8538-3d65f777518d","Type":"ContainerStarted","Data":"6486cc4af38d96fd26c04f8c4f80f944a889e7b0a2f0201d0d124cd314f67ddd"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.362532 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.364362 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c9dbfd97b-ck4jv" podStartSLOduration=15.364340063 
podStartE2EDuration="15.364340063s" podCreationTimestamp="2025-12-11 18:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.353003322 +0000 UTC m=+1056.379247366" watchObservedRunningTime="2025-12-11 18:18:15.364340063 +0000 UTC m=+1056.390584107" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.369630 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4dd6dd9-6mv2v" event={"ID":"4808e7d5-7e53-4b59-a46c-86838df224c0","Type":"ContainerStarted","Data":"9f86b7e1fec40fd9776712bc3ca13c419dcd8de3c482f67159c11945d26e6da7"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.369675 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4dd6dd9-6mv2v" event={"ID":"4808e7d5-7e53-4b59-a46c-86838df224c0","Type":"ContainerStarted","Data":"32bddee0ccab1a0fc9c0b1e73a1ea786410d1557effcff41b6aa9821952504ce"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.369830 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.390109 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerStarted","Data":"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.395724 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerStarted","Data":"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.395769 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" 
event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerStarted","Data":"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.413794 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6d4d95f94c-wnhwk" podStartSLOduration=25.413773444 podStartE2EDuration="25.413773444s" podCreationTimestamp="2025-12-11 18:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.411827442 +0000 UTC m=+1056.438071486" watchObservedRunningTime="2025-12-11 18:18:15.413773444 +0000 UTC m=+1056.440017498" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.419166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c7595" event={"ID":"d642ce6b-7f43-402d-9658-c824289a232c","Type":"ContainerStarted","Data":"161e5fc70c65ed69d826316c94e9a492f621cdcd40f85194f164b2de2916017d"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.455674 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b4dd6dd9-6mv2v" podStartSLOduration=4.455652375 podStartE2EDuration="4.455652375s" podCreationTimestamp="2025-12-11 18:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.442149807 +0000 UTC m=+1056.468393851" watchObservedRunningTime="2025-12-11 18:18:15.455652375 +0000 UTC m=+1056.481896419" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.456384 4877 generic.go:334] "Generic (PLEG): container finished" podID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerID="93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6" exitCode=0 Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.456517 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-882nl" event={"ID":"c46cd811-f91f-48b8-aa23-f916227e65d8","Type":"ContainerDied","Data":"93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.473580 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerStarted","Data":"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a"} Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.473958 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c7595" podStartSLOduration=24.47393212 podStartE2EDuration="24.47393212s" podCreationTimestamp="2025-12-11 18:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.469530213 +0000 UTC m=+1056.495774267" watchObservedRunningTime="2025-12-11 18:18:15.47393212 +0000 UTC m=+1056.500176154" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.474644 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.516050 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5757846754-b6r7j" podStartSLOduration=15.516027416 podStartE2EDuration="15.516027416s" podCreationTimestamp="2025-12-11 18:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.508309202 +0000 UTC m=+1056.534553266" watchObservedRunningTime="2025-12-11 18:18:15.516027416 +0000 UTC m=+1056.542271460" Dec 11 18:18:15 crc kubenswrapper[4877]: I1211 18:18:15.734621 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:18:16 
crc kubenswrapper[4877]: I1211 18:18:16.496548 4877 generic.go:334] "Generic (PLEG): container finished" podID="73110039-1660-4b03-9f07-2469ea7fe039" containerID="8070fd8f8c546372b9b65836b0182e287538b17f94b2a4c59a036fbf5bad787f" exitCode=0 Dec 11 18:18:16 crc kubenswrapper[4877]: I1211 18:18:16.496654 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bcdfz" event={"ID":"73110039-1660-4b03-9f07-2469ea7fe039","Type":"ContainerDied","Data":"8070fd8f8c546372b9b65836b0182e287538b17f94b2a4c59a036fbf5bad787f"} Dec 11 18:18:16 crc kubenswrapper[4877]: I1211 18:18:16.527523 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66dd954c9d-qfqj2" podStartSLOduration=9.527493768 podStartE2EDuration="9.527493768s" podCreationTimestamp="2025-12-11 18:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:15.566025653 +0000 UTC m=+1056.592269697" watchObservedRunningTime="2025-12-11 18:18:16.527493768 +0000 UTC m=+1057.553737812" Dec 11 18:18:17 crc kubenswrapper[4877]: I1211 18:18:17.992493 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bcdfz" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.082042 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts\") pod \"73110039-1660-4b03-9f07-2469ea7fe039\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.082110 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczd6\" (UniqueName: \"kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6\") pod \"73110039-1660-4b03-9f07-2469ea7fe039\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.082203 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs\") pod \"73110039-1660-4b03-9f07-2469ea7fe039\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.082267 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle\") pod \"73110039-1660-4b03-9f07-2469ea7fe039\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.082413 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data\") pod \"73110039-1660-4b03-9f07-2469ea7fe039\" (UID: \"73110039-1660-4b03-9f07-2469ea7fe039\") " Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.087691 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs" (OuterVolumeSpecName: "logs") pod "73110039-1660-4b03-9f07-2469ea7fe039" (UID: "73110039-1660-4b03-9f07-2469ea7fe039"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.091527 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6" (OuterVolumeSpecName: "kube-api-access-xczd6") pod "73110039-1660-4b03-9f07-2469ea7fe039" (UID: "73110039-1660-4b03-9f07-2469ea7fe039"). InnerVolumeSpecName "kube-api-access-xczd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.099031 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts" (OuterVolumeSpecName: "scripts") pod "73110039-1660-4b03-9f07-2469ea7fe039" (UID: "73110039-1660-4b03-9f07-2469ea7fe039"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.116408 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73110039-1660-4b03-9f07-2469ea7fe039" (UID: "73110039-1660-4b03-9f07-2469ea7fe039"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.139528 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data" (OuterVolumeSpecName: "config-data") pod "73110039-1660-4b03-9f07-2469ea7fe039" (UID: "73110039-1660-4b03-9f07-2469ea7fe039"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.185399 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73110039-1660-4b03-9f07-2469ea7fe039-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.185438 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.185449 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.185458 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73110039-1660-4b03-9f07-2469ea7fe039-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.185470 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczd6\" (UniqueName: \"kubernetes.io/projected/73110039-1660-4b03-9f07-2469ea7fe039-kube-api-access-xczd6\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.554800 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerStarted","Data":"bd70b0974b290e0f8bc491b9e182f6a1f8781e5b16dd57abac3619a74deb3543"} Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.579303 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.579276366 podStartE2EDuration="10.579276366s" podCreationTimestamp="2025-12-11 18:18:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:18.573624656 +0000 UTC m=+1059.599868700" watchObservedRunningTime="2025-12-11 18:18:18.579276366 +0000 UTC m=+1059.605520410" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.584619 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-882nl" event={"ID":"c46cd811-f91f-48b8-aa23-f916227e65d8","Type":"ContainerStarted","Data":"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7"} Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.585276 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.600862 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bcdfz" event={"ID":"73110039-1660-4b03-9f07-2469ea7fe039","Type":"ContainerDied","Data":"fa13356af6b8de8a25e5e4865eb097e20a6c8da2a1bc7f7d4f5d4ce40e847389"} Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.600916 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa13356af6b8de8a25e5e4865eb097e20a6c8da2a1bc7f7d4f5d4ce40e847389" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.601011 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bcdfz" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.619117 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-882nl" podStartSLOduration=10.619093213 podStartE2EDuration="10.619093213s" podCreationTimestamp="2025-12-11 18:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:18.608480301 +0000 UTC m=+1059.634724345" watchObservedRunningTime="2025-12-11 18:18:18.619093213 +0000 UTC m=+1059.645337257" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.619689 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerStarted","Data":"d979f5d4390e9375dce75c838a421ac8bf08748ed047faa4e573de09b7016298"} Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.645695 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.645674298 podStartE2EDuration="9.645674298s" podCreationTimestamp="2025-12-11 18:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:18.644945608 +0000 UTC m=+1059.671189652" watchObservedRunningTime="2025-12-11 18:18:18.645674298 +0000 UTC m=+1059.671918342" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.703502 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77d988cd48-2h828"] Dec 11 18:18:18 crc kubenswrapper[4877]: E1211 18:18:18.703997 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73110039-1660-4b03-9f07-2469ea7fe039" containerName="placement-db-sync" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.704017 4877 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="73110039-1660-4b03-9f07-2469ea7fe039" containerName="placement-db-sync" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.704265 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="73110039-1660-4b03-9f07-2469ea7fe039" containerName="placement-db-sync" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.705521 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.709954 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8lj7g" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.710180 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.714535 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.714751 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.716883 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.747526 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d988cd48-2h828"] Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907211 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt525\" (UniqueName: \"kubernetes.io/projected/5f1de16a-c21b-4876-99ca-60b34d1b7e75-kube-api-access-pt525\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907403 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-combined-ca-bundle\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907458 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-scripts\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907648 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f1de16a-c21b-4876-99ca-60b34d1b7e75-logs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907701 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-internal-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907765 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-public-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:18 crc kubenswrapper[4877]: I1211 18:18:18.907795 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-config-data\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009555 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-combined-ca-bundle\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009610 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-scripts\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009671 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f1de16a-c21b-4876-99ca-60b34d1b7e75-logs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009697 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-internal-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009728 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-public-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009747 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-config-data\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.009823 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt525\" (UniqueName: \"kubernetes.io/projected/5f1de16a-c21b-4876-99ca-60b34d1b7e75-kube-api-access-pt525\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.011776 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f1de16a-c21b-4876-99ca-60b34d1b7e75-logs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.018906 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-combined-ca-bundle\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.019201 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-scripts\") pod 
\"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.019332 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-config-data\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.019854 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-public-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.022996 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1de16a-c21b-4876-99ca-60b34d1b7e75-internal-tls-certs\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.035027 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt525\" (UniqueName: \"kubernetes.io/projected/5f1de16a-c21b-4876-99ca-60b34d1b7e75-kube-api-access-pt525\") pod \"placement-77d988cd48-2h828\" (UID: \"5f1de16a-c21b-4876-99ca-60b34d1b7e75\") " pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.053791 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.077983 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.125536 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.134534 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.329560 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.590627 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.591016 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.639661 4877 generic.go:334] "Generic (PLEG): container finished" podID="48526cd8-976b-46b0-a73c-eb463d914400" containerID="81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017" exitCode=0 Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.639765 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerDied","Data":"81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017"} Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.654663 4877 generic.go:334] "Generic (PLEG): container finished" podID="d642ce6b-7f43-402d-9658-c824289a232c" containerID="161e5fc70c65ed69d826316c94e9a492f621cdcd40f85194f164b2de2916017d" exitCode=0 Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.654818 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c7595" 
event={"ID":"d642ce6b-7f43-402d-9658-c824289a232c","Type":"ContainerDied","Data":"161e5fc70c65ed69d826316c94e9a492f621cdcd40f85194f164b2de2916017d"} Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.656118 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.656152 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.667469 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.689584 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:19 crc kubenswrapper[4877]: I1211 18:18:19.928520 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d988cd48-2h828"] Dec 11 18:18:19 crc kubenswrapper[4877]: W1211 18:18:19.934226 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1de16a_c21b_4876_99ca_60b34d1b7e75.slice/crio-62d0967501453de16f9ce7c8f5408dc8cd7a3f9e579c327d315ad0a410011287 WatchSource:0}: Error finding container 62d0967501453de16f9ce7c8f5408dc8cd7a3f9e579c327d315ad0a410011287: Status 404 returned error can't find the container with id 62d0967501453de16f9ce7c8f5408dc8cd7a3f9e579c327d315ad0a410011287 Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.663090 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.663739 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.676009 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d988cd48-2h828" event={"ID":"5f1de16a-c21b-4876-99ca-60b34d1b7e75","Type":"ContainerStarted","Data":"62d0967501453de16f9ce7c8f5408dc8cd7a3f9e579c327d315ad0a410011287"} Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.676794 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.676867 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.927822 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:20 crc kubenswrapper[4877]: I1211 18:18:20.929547 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:21 crc kubenswrapper[4877]: I1211 18:18:21.615045 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:18:21 crc kubenswrapper[4877]: I1211 18:18:21.692271 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:18:22 crc kubenswrapper[4877]: I1211 18:18:22.102858 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:18:22 crc kubenswrapper[4877]: I1211 18:18:22.483047 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:18:23 crc kubenswrapper[4877]: I1211 18:18:23.142119 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:24 crc kubenswrapper[4877]: I1211 18:18:24.079543 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 
11 18:18:24 crc kubenswrapper[4877]: I1211 18:18:24.139076 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:18:24 crc kubenswrapper[4877]: I1211 18:18:24.139464 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="dnsmasq-dns" containerID="cri-o://1f2e24c59c929a410eb9008ece01b66a9fb7c5310433a2a1582056a5f0e50580" gracePeriod=10 Dec 11 18:18:24 crc kubenswrapper[4877]: I1211 18:18:24.732661 4877 generic.go:334] "Generic (PLEG): container finished" podID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerID="1f2e24c59c929a410eb9008ece01b66a9fb7c5310433a2a1582056a5f0e50580" exitCode=0 Dec 11 18:18:24 crc kubenswrapper[4877]: I1211 18:18:24.732716 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" event={"ID":"44966f79-b411-4c0a-9cc1-fe2576bf06c0","Type":"ContainerDied","Data":"1f2e24c59c929a410eb9008ece01b66a9fb7c5310433a2a1582056a5f0e50580"} Dec 11 18:18:25 crc kubenswrapper[4877]: I1211 18:18:25.410881 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:18:25 crc kubenswrapper[4877]: I1211 18:18:25.616144 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:18:25 crc kubenswrapper[4877]: I1211 18:18:25.750808 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.180819 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c7595" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.240974 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.241098 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q8rr\" (UniqueName: \"kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.241181 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.241362 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.241463 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.241513 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys\") pod \"d642ce6b-7f43-402d-9658-c824289a232c\" (UID: \"d642ce6b-7f43-402d-9658-c824289a232c\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.250716 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts" (OuterVolumeSpecName: "scripts") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.255643 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.255773 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr" (OuterVolumeSpecName: "kube-api-access-2q8rr") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "kube-api-access-2q8rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.262461 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.289673 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data" (OuterVolumeSpecName: "config-data") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.291262 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d642ce6b-7f43-402d-9658-c824289a232c" (UID: "d642ce6b-7f43-402d-9658-c824289a232c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344560 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q8rr\" (UniqueName: \"kubernetes.io/projected/d642ce6b-7f43-402d-9658-c824289a232c-kube-api-access-2q8rr\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344589 4877 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344598 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344627 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344639 4877 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.344648 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d642ce6b-7f43-402d-9658-c824289a232c-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.481019 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.552434 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zn7k\" (UniqueName: \"kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.552681 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.552875 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.553041 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.553115 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.553262 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb\") pod \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\" (UID: \"44966f79-b411-4c0a-9cc1-fe2576bf06c0\") " Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.581913 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k" (OuterVolumeSpecName: "kube-api-access-7zn7k") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "kube-api-access-7zn7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.640669 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:26 crc kubenswrapper[4877]: E1211 18:18:26.641057 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642ce6b-7f43-402d-9658-c824289a232c" containerName="keystone-bootstrap" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.641073 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642ce6b-7f43-402d-9658-c824289a232c" containerName="keystone-bootstrap" Dec 11 18:18:26 crc kubenswrapper[4877]: E1211 18:18:26.641099 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="init" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.641106 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="init" Dec 11 18:18:26 crc kubenswrapper[4877]: E1211 18:18:26.641131 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="dnsmasq-dns" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.641138 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="dnsmasq-dns" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.641480 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" containerName="dnsmasq-dns" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.641496 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d642ce6b-7f43-402d-9658-c824289a232c" containerName="keystone-bootstrap" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.642998 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.655342 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zn7k\" (UniqueName: \"kubernetes.io/projected/44966f79-b411-4c0a-9cc1-fe2576bf06c0-kube-api-access-7zn7k\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.729229 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.757712 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qlz\" (UniqueName: \"kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.757761 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.757881 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.800449 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" 
event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerStarted","Data":"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6"} Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.814709 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d988cd48-2h828" event={"ID":"5f1de16a-c21b-4876-99ca-60b34d1b7e75","Type":"ContainerStarted","Data":"95c4813b5145b1e99265ca72868078e969870f5ec9b9a8b8585a2b6152eae2ef"} Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.833742 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c7595" event={"ID":"d642ce6b-7f43-402d-9658-c824289a232c","Type":"ContainerDied","Data":"b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564"} Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.833812 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2dde2a28378ae908b7d4318c20cacbe8cc1dc542a5bea1f5a23bb25b7d0d564" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.833904 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c7595" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.848492 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cblzw" podStartSLOduration=27.001819183 podStartE2EDuration="37.848462376s" podCreationTimestamp="2025-12-11 18:17:49 +0000 UTC" firstStartedPulling="2025-12-11 18:18:15.346235162 +0000 UTC m=+1056.372479206" lastFinishedPulling="2025-12-11 18:18:26.192878355 +0000 UTC m=+1067.219122399" observedRunningTime="2025-12-11 18:18:26.838921303 +0000 UTC m=+1067.865165367" watchObservedRunningTime="2025-12-11 18:18:26.848462376 +0000 UTC m=+1067.874706430" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.856264 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.856201 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-rq2nr" event={"ID":"44966f79-b411-4c0a-9cc1-fe2576bf06c0","Type":"ContainerDied","Data":"7dca7e2312d77f8f77d622e89cb8c933b11fc69afadfdf396219302c514bd3cb"} Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.858311 4877 scope.go:117] "RemoveContainer" containerID="1f2e24c59c929a410eb9008ece01b66a9fb7c5310433a2a1582056a5f0e50580" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.877528 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.877848 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6qlz\" (UniqueName: \"kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.877899 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.879395 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.897794 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.926216 4877 scope.go:117] "RemoveContainer" containerID="d73af684d035c1f38fc4100e8c17989afc3e5c317f45a09627354ad6c588c8a2" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.927228 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6qlz\" (UniqueName: \"kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz\") pod \"certified-operators-p857g\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:26 crc kubenswrapper[4877]: I1211 18:18:26.980629 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.135231 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.148343 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.150756 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.168661 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.193581 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.193624 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.193637 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.193648 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.196719 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config" (OuterVolumeSpecName: "config") pod "44966f79-b411-4c0a-9cc1-fe2576bf06c0" (UID: "44966f79-b411-4c0a-9cc1-fe2576bf06c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.296709 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44966f79-b411-4c0a-9cc1-fe2576bf06c0-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.427438 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c7754d7b9-8ngjr"] Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.433471 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.442651 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.443925 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.459927 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7754d7b9-8ngjr"] Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526645 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-scripts\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526714 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-config-data\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526817 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-fernet-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526880 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-internal-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526928 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-credential-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.526989 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfwh\" (UniqueName: \"kubernetes.io/projected/68a8f0df-c9a5-4812-860f-492cfeeae4bb-kube-api-access-4sfwh\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.527062 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-combined-ca-bundle\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 
18:18:27.527115 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-public-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.627887 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-combined-ca-bundle\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.628042 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.634533 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-rq2nr"] Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.649522 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-public-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.649770 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-scripts\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.649800 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-config-data\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.649903 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-fernet-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.649981 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-internal-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.650028 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-credential-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.650104 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfwh\" (UniqueName: \"kubernetes.io/projected/68a8f0df-c9a5-4812-860f-492cfeeae4bb-kube-api-access-4sfwh\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.652598 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-combined-ca-bundle\") pod 
\"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.669226 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-config-data\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.679539 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-internal-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.691664 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-credential-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.704255 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-public-tls-certs\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.704879 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-fernet-keys\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc 
kubenswrapper[4877]: I1211 18:18:27.707856 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfwh\" (UniqueName: \"kubernetes.io/projected/68a8f0df-c9a5-4812-860f-492cfeeae4bb-kube-api-access-4sfwh\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.711656 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68a8f0df-c9a5-4812-860f-492cfeeae4bb-scripts\") pod \"keystone-c7754d7b9-8ngjr\" (UID: \"68a8f0df-c9a5-4812-860f-492cfeeae4bb\") " pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.821881 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.822510 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.931594 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerStarted","Data":"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a"} Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.968642 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vtwqc" event={"ID":"2cc9dafb-2cd8-4a57-b7f2-941c39748675","Type":"ContainerStarted","Data":"aedb7b60063c44706ef8b6497108a13ee0dca8ce89858270b71540f5254d21d7"} Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.971442 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerStarted","Data":"7b04937940a1af46a1b4b08e4ac66380dc3325fecd47106070442dfddf45f830"} Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.986236 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wgnnt" event={"ID":"fee9614f-acc1-4883-989e-6348978f4641","Type":"ContainerStarted","Data":"fdee2ff244b2f630c75b41972cb4f2cd012275be3c830ccfbf42d963a1a5ad56"} Dec 11 18:18:27 crc kubenswrapper[4877]: I1211 18:18:27.988558 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vtwqc" podStartSLOduration=4.187490401 podStartE2EDuration="53.98853002s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="2025-12-11 18:17:36.24873038 +0000 UTC m=+1017.274974414" lastFinishedPulling="2025-12-11 18:18:26.049769969 +0000 UTC m=+1067.076014033" observedRunningTime="2025-12-11 18:18:27.985922261 +0000 UTC m=+1069.012166295" watchObservedRunningTime="2025-12-11 18:18:27.98853002 +0000 UTC m=+1069.014774064" Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.000022 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d988cd48-2h828" event={"ID":"5f1de16a-c21b-4876-99ca-60b34d1b7e75","Type":"ContainerStarted","Data":"6f75c96a3b8e00b44814bb802a50d6f0a3a3c9d4652e54949e1a862860755c80"} Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.000061 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.000074 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.047431 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wgnnt" podStartSLOduration=4.679817203 podStartE2EDuration="54.04740259s" podCreationTimestamp="2025-12-11 18:17:34 +0000 UTC" firstStartedPulling="2025-12-11 18:17:36.958744018 +0000 UTC m=+1017.984988062" lastFinishedPulling="2025-12-11 18:18:26.326329405 +0000 UTC m=+1067.352573449" observedRunningTime="2025-12-11 18:18:28.023816275 +0000 UTC m=+1069.050060329" watchObservedRunningTime="2025-12-11 18:18:28.04740259 +0000 UTC m=+1069.073646624" Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.082010 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77d988cd48-2h828" podStartSLOduration=10.081986978 podStartE2EDuration="10.081986978s" podCreationTimestamp="2025-12-11 18:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:28.070135354 +0000 UTC m=+1069.096379398" watchObservedRunningTime="2025-12-11 18:18:28.081986978 +0000 UTC m=+1069.108231022" Dec 11 18:18:28 crc kubenswrapper[4877]: I1211 18:18:28.451719 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c7754d7b9-8ngjr"] Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 
18:18:29.050456 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7754d7b9-8ngjr" event={"ID":"68a8f0df-c9a5-4812-860f-492cfeeae4bb","Type":"ContainerStarted","Data":"9893afda4711c5717a1ca002c392a68ab8db1b3a02f9aed94e184fa3cf4a3ccc"} Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.050882 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c7754d7b9-8ngjr" event={"ID":"68a8f0df-c9a5-4812-860f-492cfeeae4bb","Type":"ContainerStarted","Data":"f24506f25c977c3dae62834ca933d690f99bc6c47f0167f15d75d434c108fd75"} Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.051787 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.062034 4877 generic.go:334] "Generic (PLEG): container finished" podID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerID="97c83372168ae2a40de53647621ed03dad864bea3a54451afd60178075ec04fe" exitCode=0 Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.062234 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerDied","Data":"97c83372168ae2a40de53647621ed03dad864bea3a54451afd60178075ec04fe"} Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.082992 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c7754d7b9-8ngjr" podStartSLOduration=2.082969592 podStartE2EDuration="2.082969592s" podCreationTimestamp="2025-12-11 18:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:29.08176222 +0000 UTC m=+1070.108006264" watchObservedRunningTime="2025-12-11 18:18:29.082969592 +0000 UTC m=+1070.109213646" Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.229988 4877 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="44966f79-b411-4c0a-9cc1-fe2576bf06c0" path="/var/lib/kubelet/pods/44966f79-b411-4c0a-9cc1-fe2576bf06c0/volumes" Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.671150 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:29 crc kubenswrapper[4877]: I1211 18:18:29.672855 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:30 crc kubenswrapper[4877]: I1211 18:18:30.074904 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerStarted","Data":"c80892ab79b1e58c49f35ca0165a72a6caaf363de18f76820b45507b16ed4cf8"} Dec 11 18:18:30 crc kubenswrapper[4877]: I1211 18:18:30.670097 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 11 18:18:30 crc kubenswrapper[4877]: I1211 18:18:30.730209 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cblzw" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="registry-server" probeResult="failure" output=< Dec 11 18:18:30 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:18:30 crc kubenswrapper[4877]: > Dec 11 18:18:30 crc kubenswrapper[4877]: I1211 18:18:30.932798 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c9dbfd97b-ck4jv" podUID="2afc51b6-dafc-47ce-875a-3a6249f69b47" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.148:8443: connect: connection refused" Dec 11 18:18:31 crc kubenswrapper[4877]: I1211 18:18:31.111959 4877 generic.go:334] "Generic (PLEG): container finished" podID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerID="c80892ab79b1e58c49f35ca0165a72a6caaf363de18f76820b45507b16ed4cf8" exitCode=0 Dec 11 18:18:31 crc kubenswrapper[4877]: I1211 18:18:31.112012 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerDied","Data":"c80892ab79b1e58c49f35ca0165a72a6caaf363de18f76820b45507b16ed4cf8"} Dec 11 18:18:32 crc kubenswrapper[4877]: I1211 18:18:32.123553 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerStarted","Data":"e4a706efafee47b14eac3b4b01b6ec450d83793b3c197788aea7c55f147cd65c"} Dec 11 18:18:32 crc kubenswrapper[4877]: I1211 18:18:32.150284 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p857g" podStartSLOduration=3.387883991 podStartE2EDuration="6.150251607s" podCreationTimestamp="2025-12-11 18:18:26 +0000 UTC" firstStartedPulling="2025-12-11 18:18:29.064584904 +0000 UTC m=+1070.090828948" lastFinishedPulling="2025-12-11 18:18:31.82695251 +0000 UTC m=+1072.853196564" observedRunningTime="2025-12-11 18:18:32.140791056 +0000 UTC m=+1073.167035120" watchObservedRunningTime="2025-12-11 18:18:32.150251607 +0000 UTC m=+1073.176495651" Dec 11 18:18:33 crc kubenswrapper[4877]: I1211 18:18:33.134571 4877 generic.go:334] "Generic (PLEG): container finished" podID="fee9614f-acc1-4883-989e-6348978f4641" containerID="fdee2ff244b2f630c75b41972cb4f2cd012275be3c830ccfbf42d963a1a5ad56" exitCode=0 Dec 11 18:18:33 crc kubenswrapper[4877]: I1211 18:18:33.134669 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wgnnt" 
event={"ID":"fee9614f-acc1-4883-989e-6348978f4641","Type":"ContainerDied","Data":"fdee2ff244b2f630c75b41972cb4f2cd012275be3c830ccfbf42d963a1a5ad56"} Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.576926 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.671402 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwvw\" (UniqueName: \"kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw\") pod \"fee9614f-acc1-4883-989e-6348978f4641\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.671685 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data\") pod \"fee9614f-acc1-4883-989e-6348978f4641\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.671937 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle\") pod \"fee9614f-acc1-4883-989e-6348978f4641\" (UID: \"fee9614f-acc1-4883-989e-6348978f4641\") " Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.680484 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fee9614f-acc1-4883-989e-6348978f4641" (UID: "fee9614f-acc1-4883-989e-6348978f4641"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.691636 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw" (OuterVolumeSpecName: "kube-api-access-xkwvw") pod "fee9614f-acc1-4883-989e-6348978f4641" (UID: "fee9614f-acc1-4883-989e-6348978f4641"). InnerVolumeSpecName "kube-api-access-xkwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.730471 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fee9614f-acc1-4883-989e-6348978f4641" (UID: "fee9614f-acc1-4883-989e-6348978f4641"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.774506 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwvw\" (UniqueName: \"kubernetes.io/projected/fee9614f-acc1-4883-989e-6348978f4641-kube-api-access-xkwvw\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.774569 4877 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:34 crc kubenswrapper[4877]: I1211 18:18:34.774601 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee9614f-acc1-4883-989e-6348978f4641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.157615 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wgnnt" 
event={"ID":"fee9614f-acc1-4883-989e-6348978f4641","Type":"ContainerDied","Data":"90592653e3751c1db6c94a2a95c4ba34f8b66325e581f193b211d748d52bd964"} Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.158120 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90592653e3751c1db6c94a2a95c4ba34f8b66325e581f193b211d748d52bd964" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.158207 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wgnnt" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.167823 4877 generic.go:334] "Generic (PLEG): container finished" podID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" containerID="aedb7b60063c44706ef8b6497108a13ee0dca8ce89858270b71540f5254d21d7" exitCode=0 Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.167892 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vtwqc" event={"ID":"2cc9dafb-2cd8-4a57-b7f2-941c39748675","Type":"ContainerDied","Data":"aedb7b60063c44706ef8b6497108a13ee0dca8ce89858270b71540f5254d21d7"} Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.366834 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-85794d5dd7-rmjkp"] Dec 11 18:18:35 crc kubenswrapper[4877]: E1211 18:18:35.372050 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee9614f-acc1-4883-989e-6348978f4641" containerName="barbican-db-sync" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.372124 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee9614f-acc1-4883-989e-6348978f4641" containerName="barbican-db-sync" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.372359 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee9614f-acc1-4883-989e-6348978f4641" containerName="barbican-db-sync" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.373387 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.378954 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.379584 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4d64w" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.379759 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.394142 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85794d5dd7-rmjkp"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.472919 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-55b59dbf9b-n74fk"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.477780 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.486255 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.503053 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpg7\" (UniqueName: \"kubernetes.io/projected/79f3b97f-f3f1-4547-81e4-e2c7c833745e-kube-api-access-5vpg7\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.503207 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3b97f-f3f1-4547-81e4-e2c7c833745e-logs\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.503281 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data-custom\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.503743 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.504306 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-combined-ca-bundle\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.507278 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55b59dbf9b-n74fk"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.566025 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.568667 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.594433 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.606810 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.606897 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vpg7\" (UniqueName: \"kubernetes.io/projected/79f3b97f-f3f1-4547-81e4-e2c7c833745e-kube-api-access-5vpg7\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.606955 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data-custom\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.606985 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60930296-787e-4fea-8180-8b7d3aba29b8-logs\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607007 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3b97f-f3f1-4547-81e4-e2c7c833745e-logs\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607042 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data-custom\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607075 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607113 
4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-combined-ca-bundle\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607155 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstdt\" (UniqueName: \"kubernetes.io/projected/60930296-787e-4fea-8180-8b7d3aba29b8-kube-api-access-jstdt\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.607202 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-combined-ca-bundle\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.610748 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f3b97f-f3f1-4547-81e4-e2c7c833745e-logs\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.615171 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data-custom\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" 
Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.616342 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-combined-ca-bundle\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.629702 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f3b97f-f3f1-4547-81e4-e2c7c833745e-config-data\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.651179 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vpg7\" (UniqueName: \"kubernetes.io/projected/79f3b97f-f3f1-4547-81e4-e2c7c833745e-kube-api-access-5vpg7\") pod \"barbican-worker-85794d5dd7-rmjkp\" (UID: \"79f3b97f-f3f1-4547-81e4-e2c7c833745e\") " pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.709051 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.711325 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.712601 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.712673 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-combined-ca-bundle\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.712707 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstdt\" (UniqueName: \"kubernetes.io/projected/60930296-787e-4fea-8180-8b7d3aba29b8-kube-api-access-jstdt\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.713606 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.713752 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.713828 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.713946 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.714012 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.714198 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data-custom\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.714224 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrf6\" (UniqueName: \"kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.714263 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60930296-787e-4fea-8180-8b7d3aba29b8-logs\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.715006 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60930296-787e-4fea-8180-8b7d3aba29b8-logs\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.719071 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.726223 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-85794d5dd7-rmjkp" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.728331 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.741312 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-combined-ca-bundle\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.761480 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.787179 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60930296-787e-4fea-8180-8b7d3aba29b8-config-data-custom\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.812160 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstdt\" (UniqueName: \"kubernetes.io/projected/60930296-787e-4fea-8180-8b7d3aba29b8-kube-api-access-jstdt\") pod \"barbican-keystone-listener-55b59dbf9b-n74fk\" (UID: \"60930296-787e-4fea-8180-8b7d3aba29b8\") " pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.815293 4877 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.817021 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrf6\" (UniqueName: \"kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835112 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835259 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835500 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835583 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: 
\"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835729 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835836 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.835941 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.836025 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.836160 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8j4\" (UniqueName: \"kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: 
\"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.836259 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.837231 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.837256 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.837923 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.838479 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 
18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.839030 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.888409 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrf6\" (UniqueName: \"kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6\") pod \"dnsmasq-dns-85ff748b95-5zbxq\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.899814 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.938714 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.938779 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.938861 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom\") pod 
\"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.938921 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.938959 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8j4\" (UniqueName: \"kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.939857 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.949243 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.949716 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: 
\"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.953155 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.965103 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8j4\" (UniqueName: \"kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4\") pod \"barbican-api-85566dfdb6-gtbzq\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:35 crc kubenswrapper[4877]: I1211 18:18:35.968815 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.874229 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.874249 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dfe20951-de12-4570-8cf0-f1c8a14e275d" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.141:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.948683 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-55b59dbf9b-n74fk"] Dec 11 18:18:36 crc kubenswrapper[4877]: W1211 18:18:36.965889 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60930296_787e_4fea_8180_8b7d3aba29b8.slice/crio-4d3926c5b49fe467b70d612636ec9e27c4e291fa82eb4ba090010aaf3bdbe932 WatchSource:0}: Error finding container 4d3926c5b49fe467b70d612636ec9e27c4e291fa82eb4ba090010aaf3bdbe932: Status 404 returned error can't find the container with id 4d3926c5b49fe467b70d612636ec9e27c4e291fa82eb4ba090010aaf3bdbe932 Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.982624 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.982669 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:36 crc kubenswrapper[4877]: I1211 18:18:36.995212 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-85794d5dd7-rmjkp"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.027525 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.053936 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.101241 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.137199 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:18:37 crc kubenswrapper[4877]: W1211 18:18:37.147628 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33d11af1_3fbc_4e51_a965_c1e1e3dbc853.slice/crio-3540cd211cc7b0dc4d6238400f6f1c963765051e4f3c13247c697b2075c288cb WatchSource:0}: Error finding container 3540cd211cc7b0dc4d6238400f6f1c963765051e4f3c13247c697b2075c288cb: Status 404 returned error can't find the container with id 3540cd211cc7b0dc4d6238400f6f1c963765051e4f3c13247c697b2075c288cb Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.247651 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerStarted","Data":"3540cd211cc7b0dc4d6238400f6f1c963765051e4f3c13247c697b2075c288cb"} Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.247722 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" event={"ID":"60930296-787e-4fea-8180-8b7d3aba29b8","Type":"ContainerStarted","Data":"4d3926c5b49fe467b70d612636ec9e27c4e291fa82eb4ba090010aaf3bdbe932"} Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.247742 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85794d5dd7-rmjkp" event={"ID":"79f3b97f-f3f1-4547-81e4-e2c7c833745e","Type":"ContainerStarted","Data":"90b4d862725b4efb0c7aa5c2923b57323e68a7b9f15b0653293e8e329a1cf312"} Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.247758 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" 
event={"ID":"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8","Type":"ContainerStarted","Data":"05616a7e24b5eb69c4e2b229a9d112af82e9b8fb1a8acf8cbb3988b2aca5cc9d"} Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.250681 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vtwqc" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.250844 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vtwqc" event={"ID":"2cc9dafb-2cd8-4a57-b7f2-941c39748675","Type":"ContainerDied","Data":"b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927"} Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.250871 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1f6eefb813a79f4d82ffefe5d8d88d69603ecb8af90ea17a7ffead56dd21927" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.275767 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.276092 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.276163 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.276216 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9l9\" (UniqueName: \"kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.276322 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.276392 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle\") pod \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\" (UID: \"2cc9dafb-2cd8-4a57-b7f2-941c39748675\") " Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.277983 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.283173 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.142:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.283294 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="de1228cc-3f6d-49a9-b97a-7a0e79668c1a" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.142:9292/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.285562 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts" (OuterVolumeSpecName: "scripts") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.285666 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9" (OuterVolumeSpecName: "kube-api-access-2z9l9") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "kube-api-access-2z9l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.286627 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.324530 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.341960 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.385667 4877 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc9dafb-2cd8-4a57-b7f2-941c39748675-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.385700 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.389272 4877 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.389290 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.389300 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9l9\" (UniqueName: \"kubernetes.io/projected/2cc9dafb-2cd8-4a57-b7f2-941c39748675-kube-api-access-2z9l9\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.394742 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data" (OuterVolumeSpecName: "config-data") pod "2cc9dafb-2cd8-4a57-b7f2-941c39748675" (UID: "2cc9dafb-2cd8-4a57-b7f2-941c39748675"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.412794 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.498630 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc9dafb-2cd8-4a57-b7f2-941c39748675-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.547747 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:37 crc kubenswrapper[4877]: E1211 18:18:37.548191 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" containerName="cinder-db-sync" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.548216 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" containerName="cinder-db-sync" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.548446 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" containerName="cinder-db-sync" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.551743 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.567674 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6m4g8" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.568213 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.568532 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.568705 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.583004 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.617987 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.705086 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707179 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707300 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707437 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjb8\" (UniqueName: \"kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707478 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707502 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707520 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " 
pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.707542 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.744928 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812766 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812864 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812892 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwkt\" (UniqueName: \"kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812931 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjb8\" (UniqueName: 
\"kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812964 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.812983 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813007 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813029 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813050 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813082 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813115 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813144 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.813573 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.819684 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " 
pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.820027 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.824716 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.836929 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.852476 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.854269 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.856845 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.868157 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjb8\" (UniqueName: \"kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8\") pod \"cinder-scheduler-0\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.872724 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.915724 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.915774 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.915862 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.915927 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: 
\"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.915965 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.916040 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.916073 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spwkt\" (UniqueName: \"kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.917675 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.917868 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.918544 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.919659 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.919724 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:37 crc kubenswrapper[4877]: I1211 18:18:37.942896 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwkt\" (UniqueName: \"kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt\") pod \"dnsmasq-dns-5c9776ccc5-54z2g\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018061 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: 
I1211 18:18:38.018127 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018160 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018203 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs8x\" (UniqueName: \"kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018231 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018697 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.018903 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.053702 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.075122 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.125161 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.125271 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.125355 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.125420 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc 
kubenswrapper[4877]: I1211 18:18:38.131640 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.133191 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.141363 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.142595 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.142693 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.142814 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs8x\" (UniqueName: \"kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x\") pod \"cinder-api-0\" (UID: 
\"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.149240 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.155011 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.157392 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.166206 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs8x\" (UniqueName: \"kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x\") pod \"cinder-api-0\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.306745 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerStarted","Data":"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666"} Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.306802 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" 
event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerStarted","Data":"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4"} Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.307132 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.307220 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.316999 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.347326 4877 generic.go:334] "Generic (PLEG): container finished" podID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" containerID="04547e0b9ee4a23ea8c35ed993436985c69553c098fea7fb00da751463c207c1" exitCode=0 Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.347683 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" event={"ID":"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8","Type":"ContainerDied","Data":"04547e0b9ee4a23ea8c35ed993436985c69553c098fea7fb00da751463c207c1"} Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.359682 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85566dfdb6-gtbzq" podStartSLOduration=3.359654517 podStartE2EDuration="3.359654517s" podCreationTimestamp="2025-12-11 18:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:38.34019299 +0000 UTC m=+1079.366437034" watchObservedRunningTime="2025-12-11 18:18:38.359654517 +0000 UTC m=+1079.385898561" Dec 11 18:18:38 crc kubenswrapper[4877]: I1211 18:18:38.533319 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:38 crc 
kubenswrapper[4877]: I1211 18:18:38.792652 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.095325 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:18:39 crc kubenswrapper[4877]: E1211 18:18:39.118676 4877 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 11 18:18:39 crc kubenswrapper[4877]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 18:18:39 crc kubenswrapper[4877]: > podSandboxID="05616a7e24b5eb69c4e2b229a9d112af82e9b8fb1a8acf8cbb3988b2aca5cc9d" Dec 11 18:18:39 crc kubenswrapper[4877]: E1211 18:18:39.119015 4877 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 11 18:18:39 crc kubenswrapper[4877]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqrf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85ff748b95-5zbxq_openstack(7ee4de25-c46b-49ba-ab97-6bf23d4b77e8): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 18:18:39 crc kubenswrapper[4877]: > logger="UnhandledError" Dec 11 18:18:39 crc kubenswrapper[4877]: E1211 18:18:39.122435 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" podUID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.421727 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerStarted","Data":"aeb4ab1aba4e7653927043174fe9bd2b71aa2778f4d5ae6be9e20affb1511837"} Dec 11 18:18:39 crc 
kubenswrapper[4877]: I1211 18:18:39.444602 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerStarted","Data":"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac"} Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.445617 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerStarted","Data":"4c39fbaac7103894703a33f9b9202f816a62ffb12b88d6047fcbacd9726792bb"} Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.467638 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerStarted","Data":"ea7e7a78a23735c60bb5fc1643a3b618c9f543e24b8207ec833274ddd4b4eb75"} Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.467819 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p857g" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="registry-server" containerID="cri-o://e4a706efafee47b14eac3b4b01b6ec450d83793b3c197788aea7c55f147cd65c" gracePeriod=2 Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.784409 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:39 crc kubenswrapper[4877]: I1211 18:18:39.885282 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.297342 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.457390 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.457539 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.457591 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.464838 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqrf6\" (UniqueName: \"kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.464965 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.465043 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0\") pod \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\" (UID: \"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8\") " Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.497178 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6" (OuterVolumeSpecName: "kube-api-access-sqrf6") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). InnerVolumeSpecName "kube-api-access-sqrf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.574869 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqrf6\" (UniqueName: \"kubernetes.io/projected/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-kube-api-access-sqrf6\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.604055 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.637242 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" event={"ID":"7ee4de25-c46b-49ba-ab97-6bf23d4b77e8","Type":"ContainerDied","Data":"05616a7e24b5eb69c4e2b229a9d112af82e9b8fb1a8acf8cbb3988b2aca5cc9d"} Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.637553 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-5zbxq" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.637920 4877 scope.go:117] "RemoveContainer" containerID="04547e0b9ee4a23ea8c35ed993436985c69553c098fea7fb00da751463c207c1" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.637621 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.680031 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.681296 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.681339 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.686242 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.693652 4877 generic.go:334] "Generic (PLEG): container finished" podID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerID="9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac" exitCode=0 Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.693789 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerDied","Data":"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac"} Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.697877 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.707659 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config" (OuterVolumeSpecName: "config") pod "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" (UID: "7ee4de25-c46b-49ba-ab97-6bf23d4b77e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.714599 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.730827 4877 generic.go:334] "Generic (PLEG): container finished" podID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerID="e4a706efafee47b14eac3b4b01b6ec450d83793b3c197788aea7c55f147cd65c" exitCode=0 Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.730940 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerDied","Data":"e4a706efafee47b14eac3b4b01b6ec450d83793b3c197788aea7c55f147cd65c"} Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.736363 4877 generic.go:334] "Generic (PLEG): container finished" podID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerID="d13cee107d6d9af0059b0e8acbbf4a909d66620c1f749c2efe338829041bd046" exitCode=137 Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.736414 4877 generic.go:334] "Generic (PLEG): container finished" podID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerID="983cc5afc116fbe82d19b2e4bc15438ca9aac2cdbd0769e5d1319a3b34437c8f" exitCode=137 Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.736919 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerDied","Data":"d13cee107d6d9af0059b0e8acbbf4a909d66620c1f749c2efe338829041bd046"} Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.736957 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerDied","Data":"983cc5afc116fbe82d19b2e4bc15438ca9aac2cdbd0769e5d1319a3b34437c8f"} Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.782969 
4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.783009 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.783024 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:40 crc kubenswrapper[4877]: I1211 18:18:40.928801 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c9dbfd97b-ck4jv" podUID="2afc51b6-dafc-47ce-875a-3a6249f69b47" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.035705 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.048508 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-5zbxq"] Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.230533 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" path="/var/lib/kubelet/pods/7ee4de25-c46b-49ba-ab97-6bf23d4b77e8/volumes" Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.246973 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.694196 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-7b4dd6dd9-6mv2v" Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.756659 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerStarted","Data":"6410f0c0ffd885c4b82c14dd717e532d8f40771b1e0bcda024b48b722bc50ed1"} Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.772333 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cblzw" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="registry-server" containerID="cri-o://8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6" gracePeriod=2 Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.772539 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerStarted","Data":"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0"} Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.772637 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.773191 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66dd954c9d-qfqj2" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-api" containerID="cri-o://a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4" gracePeriod=30 Dec 11 18:18:41 crc kubenswrapper[4877]: I1211 18:18:41.773296 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66dd954c9d-qfqj2" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-httpd" containerID="cri-o://c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a" gracePeriod=30 Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.354199 4877 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.384029 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528563 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key\") pod \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528648 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbxbx\" (UniqueName: \"kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx\") pod \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528690 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities\") pod \"66a63ded-b510-43e5-a29d-a0e65aa4c677\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528745 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data\") pod \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528787 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts\") pod \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\" (UID: 
\"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528833 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content\") pod \"66a63ded-b510-43e5-a29d-a0e65aa4c677\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528878 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs\") pod \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\" (UID: \"ef4bb395-d817-4cf1-a7b9-692cf1831b79\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.528911 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6qlz\" (UniqueName: \"kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz\") pod \"66a63ded-b510-43e5-a29d-a0e65aa4c677\" (UID: \"66a63ded-b510-43e5-a29d-a0e65aa4c677\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.535308 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs" (OuterVolumeSpecName: "logs") pod "ef4bb395-d817-4cf1-a7b9-692cf1831b79" (UID: "ef4bb395-d817-4cf1-a7b9-692cf1831b79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.550190 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities" (OuterVolumeSpecName: "utilities") pod "66a63ded-b510-43e5-a29d-a0e65aa4c677" (UID: "66a63ded-b510-43e5-a29d-a0e65aa4c677"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.563137 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz" (OuterVolumeSpecName: "kube-api-access-z6qlz") pod "66a63ded-b510-43e5-a29d-a0e65aa4c677" (UID: "66a63ded-b510-43e5-a29d-a0e65aa4c677"). InnerVolumeSpecName "kube-api-access-z6qlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.563226 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx" (OuterVolumeSpecName: "kube-api-access-bbxbx") pod "ef4bb395-d817-4cf1-a7b9-692cf1831b79" (UID: "ef4bb395-d817-4cf1-a7b9-692cf1831b79"). InnerVolumeSpecName "kube-api-access-bbxbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.627602 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ef4bb395-d817-4cf1-a7b9-692cf1831b79" (UID: "ef4bb395-d817-4cf1-a7b9-692cf1831b79"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.635431 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts" (OuterVolumeSpecName: "scripts") pod "ef4bb395-d817-4cf1-a7b9-692cf1831b79" (UID: "ef4bb395-d817-4cf1-a7b9-692cf1831b79"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636420 4877 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef4bb395-d817-4cf1-a7b9-692cf1831b79-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636452 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbxbx\" (UniqueName: \"kubernetes.io/projected/ef4bb395-d817-4cf1-a7b9-692cf1831b79-kube-api-access-bbxbx\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636466 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636476 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636486 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef4bb395-d817-4cf1-a7b9-692cf1831b79-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.636496 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6qlz\" (UniqueName: \"kubernetes.io/projected/66a63ded-b510-43e5-a29d-a0e65aa4c677-kube-api-access-z6qlz\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.645175 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a63ded-b510-43e5-a29d-a0e65aa4c677" (UID: 
"66a63ded-b510-43e5-a29d-a0e65aa4c677"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.690262 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data" (OuterVolumeSpecName: "config-data") pod "ef4bb395-d817-4cf1-a7b9-692cf1831b79" (UID: "ef4bb395-d817-4cf1-a7b9-692cf1831b79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.745043 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef4bb395-d817-4cf1-a7b9-692cf1831b79-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.745099 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a63ded-b510-43e5-a29d-a0e65aa4c677-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.745140 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.851222 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content\") pod \"48526cd8-976b-46b0-a73c-eb463d914400\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.851915 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnt5n\" (UniqueName: \"kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n\") pod \"48526cd8-976b-46b0-a73c-eb463d914400\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.852089 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities\") pod \"48526cd8-976b-46b0-a73c-eb463d914400\" (UID: \"48526cd8-976b-46b0-a73c-eb463d914400\") " Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.853536 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities" (OuterVolumeSpecName: "utilities") pod "48526cd8-976b-46b0-a73c-eb463d914400" (UID: "48526cd8-976b-46b0-a73c-eb463d914400"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.858320 4877 generic.go:334] "Generic (PLEG): container finished" podID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerID="c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a" exitCode=0 Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.858451 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerDied","Data":"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a"} Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.926913 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n" (OuterVolumeSpecName: "kube-api-access-vnt5n") pod "48526cd8-976b-46b0-a73c-eb463d914400" (UID: "48526cd8-976b-46b0-a73c-eb463d914400"). InnerVolumeSpecName "kube-api-access-vnt5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.929122 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerStarted","Data":"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941"} Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.929613 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.950501 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p857g" event={"ID":"66a63ded-b510-43e5-a29d-a0e65aa4c677","Type":"ContainerDied","Data":"7b04937940a1af46a1b4b08e4ac66380dc3325fecd47106070442dfddf45f830"} Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.950556 4877 scope.go:117] "RemoveContainer" containerID="e4a706efafee47b14eac3b4b01b6ec450d83793b3c197788aea7c55f147cd65c" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.950729 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p857g" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.954837 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.954883 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnt5n\" (UniqueName: \"kubernetes.io/projected/48526cd8-976b-46b0-a73c-eb463d914400-kube-api-access-vnt5n\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:42 crc kubenswrapper[4877]: I1211 18:18:42.976152 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" podStartSLOduration=5.976124109 podStartE2EDuration="5.976124109s" podCreationTimestamp="2025-12-11 18:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:42.966542815 +0000 UTC m=+1083.992786869" watchObservedRunningTime="2025-12-11 18:18:42.976124109 +0000 UTC m=+1084.002368153" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.001024 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b78ffc5c-cmjhz" event={"ID":"ef4bb395-d817-4cf1-a7b9-692cf1831b79","Type":"ContainerDied","Data":"beb84b7d97e05160b03fa69f55ba5bc3e075108faef238ad4b2df5e39002d9a2"} Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.001169 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b78ffc5c-cmjhz" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.014629 4877 scope.go:117] "RemoveContainer" containerID="c80892ab79b1e58c49f35ca0165a72a6caaf363de18f76820b45507b16ed4cf8" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.037748 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48526cd8-976b-46b0-a73c-eb463d914400" (UID: "48526cd8-976b-46b0-a73c-eb463d914400"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.041898 4877 generic.go:334] "Generic (PLEG): container finished" podID="48526cd8-976b-46b0-a73c-eb463d914400" containerID="8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6" exitCode=0 Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.041966 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.042013 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cblzw" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.042034 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerDied","Data":"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6"} Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.042069 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cblzw" event={"ID":"48526cd8-976b-46b0-a73c-eb463d914400","Type":"ContainerDied","Data":"997e954834e59c67ef8ea77682378e963da89d2a4e6156efa4811cdec48550da"} Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.056750 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48526cd8-976b-46b0-a73c-eb463d914400-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.081885 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p857g"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.132566 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.144844 4877 scope.go:117] "RemoveContainer" containerID="97c83372168ae2a40de53647621ed03dad864bea3a54451afd60178075ec04fe" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.176448 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b78ffc5c-cmjhz"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.206127 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.281468 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" path="/var/lib/kubelet/pods/66a63ded-b510-43e5-a29d-a0e65aa4c677/volumes" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.285212 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" path="/var/lib/kubelet/pods/ef4bb395-d817-4cf1-a7b9-692cf1831b79/volumes" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.286191 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cblzw"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.322211 4877 scope.go:117] "RemoveContainer" containerID="d13cee107d6d9af0059b0e8acbbf4a909d66620c1f749c2efe338829041bd046" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.615124 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f986c9df4-vbvbf"] Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624061 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624098 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624114 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" containerName="init" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624123 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" containerName="init" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624135 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="extract-content" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624142 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="extract-content" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624158 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon-log" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624166 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon-log" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624181 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="extract-utilities" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624187 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="extract-utilities" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624202 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624207 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624240 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="extract-content" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624246 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="extract-content" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624262 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="extract-utilities" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624269 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="extract-utilities" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.624282 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624289 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624510 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a63ded-b510-43e5-a29d-a0e65aa4c677" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624524 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon-log" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624536 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee4de25-c46b-49ba-ab97-6bf23d4b77e8" containerName="init" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624545 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4bb395-d817-4cf1-a7b9-692cf1831b79" containerName="horizon" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.624555 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="48526cd8-976b-46b0-a73c-eb463d914400" containerName="registry-server" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.626049 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.628884 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f986c9df4-vbvbf"] Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.636091 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.636959 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.732999 4877 scope.go:117] "RemoveContainer" containerID="983cc5afc116fbe82d19b2e4bc15438ca9aac2cdbd0769e5d1319a3b34437c8f" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.787972 4877 scope.go:117] "RemoveContainer" containerID="8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788306 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-public-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788360 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-internal-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788417 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-combined-ca-bundle\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788451 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-logs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788472 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788498 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data-custom\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.788619 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4fs\" (UniqueName: \"kubernetes.io/projected/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-kube-api-access-nr4fs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.821790 4877 scope.go:117] "RemoveContainer" 
containerID="81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.849704 4877 scope.go:117] "RemoveContainer" containerID="38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.875433 4877 scope.go:117] "RemoveContainer" containerID="8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.876993 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6\": container with ID starting with 8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6 not found: ID does not exist" containerID="8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.877139 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6"} err="failed to get container status \"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6\": rpc error: code = NotFound desc = could not find container \"8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6\": container with ID starting with 8f11ff53b806c92f226bfca4cee5e7e8341ccadb4247005188cd6fe3a856dbb6 not found: ID does not exist" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.877261 4877 scope.go:117] "RemoveContainer" containerID="81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.877717 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017\": container with ID starting with 
81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017 not found: ID does not exist" containerID="81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.877753 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017"} err="failed to get container status \"81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017\": rpc error: code = NotFound desc = could not find container \"81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017\": container with ID starting with 81c20708170393996faddd6f9f34517fe11e4bbf32fe6ec9f6b20d95e84a4017 not found: ID does not exist" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.877806 4877 scope.go:117] "RemoveContainer" containerID="38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e" Dec 11 18:18:43 crc kubenswrapper[4877]: E1211 18:18:43.878238 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e\": container with ID starting with 38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e not found: ID does not exist" containerID="38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.878268 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e"} err="failed to get container status \"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e\": rpc error: code = NotFound desc = could not find container \"38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e\": container with ID starting with 38355607cad6edd868b6d91f71e2f18f8997b270ff553d36cb199e2d11ca342e not found: ID does not 
exist" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.890435 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4fs\" (UniqueName: \"kubernetes.io/projected/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-kube-api-access-nr4fs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.890851 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-public-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.890969 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-internal-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.891043 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-combined-ca-bundle\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.891120 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-logs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc 
kubenswrapper[4877]: I1211 18:18:43.891189 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.891782 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data-custom\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.891738 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-logs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.900978 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-public-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.901761 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.912002 4877 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nr4fs\" (UniqueName: \"kubernetes.io/projected/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-kube-api-access-nr4fs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.912006 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-config-data-custom\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.912757 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-combined-ca-bundle\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.915418 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03-internal-tls-certs\") pod \"barbican-api-5f986c9df4-vbvbf\" (UID: \"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03\") " pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:43 crc kubenswrapper[4877]: I1211 18:18:43.981863 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.056719 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerStarted","Data":"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.057307 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.056922 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api" containerID="cri-o://11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" gracePeriod=30 Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.056848 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api-log" containerID="cri-o://f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" gracePeriod=30 Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.071859 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" event={"ID":"60930296-787e-4fea-8180-8b7d3aba29b8","Type":"ContainerStarted","Data":"bcf52e66a430424f486474aad1a23b01a9404f5c9a0cc9d9c2b37d65ba15ce8d"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.071947 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" event={"ID":"60930296-787e-4fea-8180-8b7d3aba29b8","Type":"ContainerStarted","Data":"7e23492db8f481fe5710f82479dd5ad34eceeb13eb684265691666e7df40fe2b"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.077677 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-85794d5dd7-rmjkp" event={"ID":"79f3b97f-f3f1-4547-81e4-e2c7c833745e","Type":"ContainerStarted","Data":"1f52bc03a541442cc906ab0ac21d29138c01c609b2a7064f184feeae32ac7b9a"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.077743 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-85794d5dd7-rmjkp" event={"ID":"79f3b97f-f3f1-4547-81e4-e2c7c833745e","Type":"ContainerStarted","Data":"ae0466fcf90ce242138c7dc13a26b96ade39a442bf2cc76d95ec89185b5d13b8"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.086780 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerStarted","Data":"31db71442d2051d799f8df1a027e595a0e90abd9575aecc694719d3a79283176"} Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.114677 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.114652061 podStartE2EDuration="7.114652061s" podCreationTimestamp="2025-12-11 18:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:44.104560164 +0000 UTC m=+1085.130804208" watchObservedRunningTime="2025-12-11 18:18:44.114652061 +0000 UTC m=+1085.140896105" Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.151470 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-85794d5dd7-rmjkp" podStartSLOduration=3.880744849 podStartE2EDuration="9.151439717s" podCreationTimestamp="2025-12-11 18:18:35 +0000 UTC" firstStartedPulling="2025-12-11 18:18:36.994523553 +0000 UTC m=+1078.020767597" lastFinishedPulling="2025-12-11 18:18:42.265218421 +0000 UTC m=+1083.291462465" observedRunningTime="2025-12-11 18:18:44.148224432 +0000 UTC m=+1085.174468486" watchObservedRunningTime="2025-12-11 18:18:44.151439717 +0000 UTC 
m=+1085.177683761" Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.172097 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-55b59dbf9b-n74fk" podStartSLOduration=3.83444898 podStartE2EDuration="9.172069414s" podCreationTimestamp="2025-12-11 18:18:35 +0000 UTC" firstStartedPulling="2025-12-11 18:18:36.967725242 +0000 UTC m=+1077.993969286" lastFinishedPulling="2025-12-11 18:18:42.305345676 +0000 UTC m=+1083.331589720" observedRunningTime="2025-12-11 18:18:44.169987729 +0000 UTC m=+1085.196231783" watchObservedRunningTime="2025-12-11 18:18:44.172069414 +0000 UTC m=+1085.198313458" Dec 11 18:18:44 crc kubenswrapper[4877]: I1211 18:18:44.684848 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f986c9df4-vbvbf"] Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.038812 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067408 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067525 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067559 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcs8x\" (UniqueName: \"kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x\") pod 
\"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067632 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067784 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067816 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.067901 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs\") pod \"e1776d1d-2543-4923-8d58-08610435d2fe\" (UID: \"e1776d1d-2543-4923-8d58-08610435d2fe\") " Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.068802 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs" (OuterVolumeSpecName: "logs") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.070530 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.077087 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x" (OuterVolumeSpecName: "kube-api-access-jcs8x") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "kube-api-access-jcs8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.078037 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.078190 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts" (OuterVolumeSpecName: "scripts") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.176493 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.176919 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcs8x\" (UniqueName: \"kubernetes.io/projected/e1776d1d-2543-4923-8d58-08610435d2fe-kube-api-access-jcs8x\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.176932 4877 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.176944 4877 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1776d1d-2543-4923-8d58-08610435d2fe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.176955 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1776d1d-2543-4923-8d58-08610435d2fe-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.202537 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.206107 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data" (OuterVolumeSpecName: "config-data") pod "e1776d1d-2543-4923-8d58-08610435d2fe" (UID: "e1776d1d-2543-4923-8d58-08610435d2fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214010 4877 generic.go:334] "Generic (PLEG): container finished" podID="e1776d1d-2543-4923-8d58-08610435d2fe" containerID="11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" exitCode=0 Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214055 4877 generic.go:334] "Generic (PLEG): container finished" podID="e1776d1d-2543-4923-8d58-08610435d2fe" containerID="f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" exitCode=143 Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214145 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerDied","Data":"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b"} Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214184 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerDied","Data":"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0"} Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214195 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1776d1d-2543-4923-8d58-08610435d2fe","Type":"ContainerDied","Data":"ea7e7a78a23735c60bb5fc1643a3b618c9f543e24b8207ec833274ddd4b4eb75"} Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214213 4877 scope.go:117] "RemoveContainer" 
containerID="11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.214433 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.247685 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48526cd8-976b-46b0-a73c-eb463d914400" path="/var/lib/kubelet/pods/48526cd8-976b-46b0-a73c-eb463d914400/volumes" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.248913 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f986c9df4-vbvbf" event={"ID":"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03","Type":"ContainerStarted","Data":"92418faea1fef520145eea64855567274a3ccdfbc13aba8648ebe8ca4c000519"} Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.248943 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f986c9df4-vbvbf" event={"ID":"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03","Type":"ContainerStarted","Data":"18d807f0a15ed26b27a30fec8cb0f1cc2bd45fd8f3d2bea33dc94fa41b67a7b1"} Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.265167 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.906719436 podStartE2EDuration="8.265138221s" podCreationTimestamp="2025-12-11 18:18:37 +0000 UTC" firstStartedPulling="2025-12-11 18:18:38.517554015 +0000 UTC m=+1079.543798059" lastFinishedPulling="2025-12-11 18:18:39.8759728 +0000 UTC m=+1080.902216844" observedRunningTime="2025-12-11 18:18:45.263487667 +0000 UTC m=+1086.289731711" watchObservedRunningTime="2025-12-11 18:18:45.265138221 +0000 UTC m=+1086.291382255" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.279063 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.279109 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1776d1d-2543-4923-8d58-08610435d2fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.316286 4877 scope.go:117] "RemoveContainer" containerID="f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.393712 4877 scope.go:117] "RemoveContainer" containerID="11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" Dec 11 18:18:45 crc kubenswrapper[4877]: E1211 18:18:45.395613 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b\": container with ID starting with 11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b not found: ID does not exist" containerID="11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.396053 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b"} err="failed to get container status \"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b\": rpc error: code = NotFound desc = could not find container \"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b\": container with ID starting with 11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b not found: ID does not exist" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.396083 4877 scope.go:117] "RemoveContainer" containerID="f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" Dec 11 18:18:45 crc kubenswrapper[4877]: E1211 18:18:45.406681 4877 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0\": container with ID starting with f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0 not found: ID does not exist" containerID="f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.406743 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0"} err="failed to get container status \"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0\": rpc error: code = NotFound desc = could not find container \"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0\": container with ID starting with f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0 not found: ID does not exist" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.406782 4877 scope.go:117] "RemoveContainer" containerID="11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b" Dec 11 18:18:45 crc kubenswrapper[4877]: W1211 18:18:45.407867 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice/crio-f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0.scope WatchSource:0}: Error finding container f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0: Status 404 returned error can't find the container with id f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0 Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.410583 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b"} err="failed to get container status 
\"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b\": rpc error: code = NotFound desc = could not find container \"11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b\": container with ID starting with 11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b not found: ID does not exist" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.410637 4877 scope.go:117] "RemoveContainer" containerID="f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0" Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.411889 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0"} err="failed to get container status \"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0\": rpc error: code = NotFound desc = could not find container \"f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0\": container with ID starting with f565501df558bcc76bf26b41c3c4f21bac4859d77b56c5112af9bc0a799b5dc0 not found: ID does not exist" Dec 11 18:18:45 crc kubenswrapper[4877]: W1211 18:18:45.415740 4877 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice/crio-conmon-11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice/crio-conmon-11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b.scope: no such file or directory Dec 11 18:18:45 crc kubenswrapper[4877]: W1211 18:18:45.415784 4877 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice/crio-11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice/crio-11ac96bce0df5335a5d9148b40c57e11016db49b072a69de0c13ecaab372067b.scope: no such file or directory Dec 11 18:18:45 crc kubenswrapper[4877]: I1211 18:18:45.885830 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.025194 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts\") pod \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.025240 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key\") pod \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.025267 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data\") pod \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.025314 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8km5\" (UniqueName: \"kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5\") pod 
\"12291457-a2ba-4bfa-8c21-fdf315e8dc12\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.025405 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs\") pod \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\" (UID: \"12291457-a2ba-4bfa-8c21-fdf315e8dc12\") " Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.026464 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs" (OuterVolumeSpecName: "logs") pod "12291457-a2ba-4bfa-8c21-fdf315e8dc12" (UID: "12291457-a2ba-4bfa-8c21-fdf315e8dc12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.053753 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5" (OuterVolumeSpecName: "kube-api-access-d8km5") pod "12291457-a2ba-4bfa-8c21-fdf315e8dc12" (UID: "12291457-a2ba-4bfa-8c21-fdf315e8dc12"). InnerVolumeSpecName "kube-api-access-d8km5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.054492 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "12291457-a2ba-4bfa-8c21-fdf315e8dc12" (UID: "12291457-a2ba-4bfa-8c21-fdf315e8dc12"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.066168 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data" (OuterVolumeSpecName: "config-data") pod "12291457-a2ba-4bfa-8c21-fdf315e8dc12" (UID: "12291457-a2ba-4bfa-8c21-fdf315e8dc12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.099843 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts" (OuterVolumeSpecName: "scripts") pod "12291457-a2ba-4bfa-8c21-fdf315e8dc12" (UID: "12291457-a2ba-4bfa-8c21-fdf315e8dc12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.128648 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.129039 4877 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/12291457-a2ba-4bfa-8c21-fdf315e8dc12-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.129052 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12291457-a2ba-4bfa-8c21-fdf315e8dc12-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.129062 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8km5\" (UniqueName: \"kubernetes.io/projected/12291457-a2ba-4bfa-8c21-fdf315e8dc12-kube-api-access-d8km5\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:46 crc 
kubenswrapper[4877]: I1211 18:18:46.129079 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12291457-a2ba-4bfa-8c21-fdf315e8dc12-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252753 4877 generic.go:334] "Generic (PLEG): container finished" podID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerID="cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" exitCode=137 Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252787 4877 generic.go:334] "Generic (PLEG): container finished" podID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerID="a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" exitCode=137 Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252846 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerDied","Data":"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355"} Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252889 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerDied","Data":"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f"} Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252916 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79c649cbcc-97rb4" event={"ID":"12291457-a2ba-4bfa-8c21-fdf315e8dc12","Type":"ContainerDied","Data":"b7230d34d29cf4f72ee46b1eb9f4ddcfee64dad7bcd6a03b61b003985f2f678c"} Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.252940 4877 scope.go:117] "RemoveContainer" containerID="cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.253056 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79c649cbcc-97rb4" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.286745 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f986c9df4-vbvbf" event={"ID":"a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03","Type":"ContainerStarted","Data":"f587fb85fd9d045e74c5113f02852deb47d4a0d2b193a418fe843f2b4e72db1a"} Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.287544 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.287984 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.320409 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.342726 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79c649cbcc-97rb4"] Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.370137 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f986c9df4-vbvbf" podStartSLOduration=3.370109052 podStartE2EDuration="3.370109052s" podCreationTimestamp="2025-12-11 18:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:18:46.352918086 +0000 UTC m=+1087.379162150" watchObservedRunningTime="2025-12-11 18:18:46.370109052 +0000 UTC m=+1087.396353096" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.565885 4877 scope.go:117] "RemoveContainer" containerID="a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.597449 4877 scope.go:117] "RemoveContainer" containerID="cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" Dec 11 
18:18:46 crc kubenswrapper[4877]: E1211 18:18:46.598915 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355\": container with ID starting with cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355 not found: ID does not exist" containerID="cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.598949 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355"} err="failed to get container status \"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355\": rpc error: code = NotFound desc = could not find container \"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355\": container with ID starting with cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355 not found: ID does not exist" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.598976 4877 scope.go:117] "RemoveContainer" containerID="a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" Dec 11 18:18:46 crc kubenswrapper[4877]: E1211 18:18:46.599739 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f\": container with ID starting with a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f not found: ID does not exist" containerID="a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.599772 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f"} err="failed to get container status 
\"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f\": rpc error: code = NotFound desc = could not find container \"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f\": container with ID starting with a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f not found: ID does not exist" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.599788 4877 scope.go:117] "RemoveContainer" containerID="cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.600174 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355"} err="failed to get container status \"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355\": rpc error: code = NotFound desc = could not find container \"cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355\": container with ID starting with cf70b7913f7071c56178767212f1f2be2df47adec024b9ebeb736c8d466ab355 not found: ID does not exist" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.600197 4877 scope.go:117] "RemoveContainer" containerID="a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f" Dec 11 18:18:46 crc kubenswrapper[4877]: I1211 18:18:46.600586 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f"} err="failed to get container status \"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f\": rpc error: code = NotFound desc = could not find container \"a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f\": container with ID starting with a30a08f0e31db5ebe89b28df1e5200e5b507e1ba863792c3c966005de0ac221f not found: ID does not exist" Dec 11 18:18:47 crc kubenswrapper[4877]: I1211 18:18:47.235390 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" path="/var/lib/kubelet/pods/12291457-a2ba-4bfa-8c21-fdf315e8dc12/volumes" Dec 11 18:18:47 crc kubenswrapper[4877]: E1211 18:18:47.532038 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45886494_4c47_4ebd_8531_4895a7f7a2ed.slice/crio-conmon-a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45886494_4c47_4ebd_8531_4895a7f7a2ed.slice/crio-a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4.scope\": RecentStats: unable to find data in memory cache]" Dec 11 18:18:47 crc kubenswrapper[4877]: I1211 18:18:47.801089 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:47 crc kubenswrapper[4877]: I1211 18:18:47.844211 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:47 crc kubenswrapper[4877]: I1211 18:18:47.919876 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 18:18:47 crc kubenswrapper[4877]: I1211 18:18:47.983581 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.077583 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.079427 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config\") pod \"45886494-4c47-4ebd-8531-4895a7f7a2ed\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.079574 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle\") pod \"45886494-4c47-4ebd-8531-4895a7f7a2ed\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.080107 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config\") pod \"45886494-4c47-4ebd-8531-4895a7f7a2ed\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.080387 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5slmn\" (UniqueName: \"kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn\") pod \"45886494-4c47-4ebd-8531-4895a7f7a2ed\" (UID: \"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.080512 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs\") pod \"45886494-4c47-4ebd-8531-4895a7f7a2ed\" (UID: 
\"45886494-4c47-4ebd-8531-4895a7f7a2ed\") " Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.099694 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn" (OuterVolumeSpecName: "kube-api-access-5slmn") pod "45886494-4c47-4ebd-8531-4895a7f7a2ed" (UID: "45886494-4c47-4ebd-8531-4895a7f7a2ed"). InnerVolumeSpecName "kube-api-access-5slmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.107595 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "45886494-4c47-4ebd-8531-4895a7f7a2ed" (UID: "45886494-4c47-4ebd-8531-4895a7f7a2ed"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.184225 4877 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.184261 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5slmn\" (UniqueName: \"kubernetes.io/projected/45886494-4c47-4ebd-8531-4895a7f7a2ed-kube-api-access-5slmn\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.227743 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.228021 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-882nl" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="dnsmasq-dns" containerID="cri-o://8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7" 
gracePeriod=10 Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.247480 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config" (OuterVolumeSpecName: "config") pod "45886494-4c47-4ebd-8531-4895a7f7a2ed" (UID: "45886494-4c47-4ebd-8531-4895a7f7a2ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.253562 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45886494-4c47-4ebd-8531-4895a7f7a2ed" (UID: "45886494-4c47-4ebd-8531-4895a7f7a2ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.287038 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.287070 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.288138 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.318573 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "45886494-4c47-4ebd-8531-4895a7f7a2ed" (UID: "45886494-4c47-4ebd-8531-4895a7f7a2ed"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.350483 4877 generic.go:334] "Generic (PLEG): container finished" podID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerID="a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4" exitCode=0 Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.350606 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66dd954c9d-qfqj2" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.351420 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerDied","Data":"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4"} Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.351489 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66dd954c9d-qfqj2" event={"ID":"45886494-4c47-4ebd-8531-4895a7f7a2ed","Type":"ContainerDied","Data":"de000a49d03bffca2ee444ec205bd64db2011bdcf2f1f1d24f2d5ddb9b130a99"} Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.351510 4877 scope.go:117] "RemoveContainer" containerID="c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.389510 4877 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45886494-4c47-4ebd-8531-4895a7f7a2ed-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.420157 4877 scope.go:117] "RemoveContainer" containerID="a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.456468 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.468291 4877 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.477939 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66dd954c9d-qfqj2"] Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.514574 4877 scope.go:117] "RemoveContainer" containerID="c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a" Dec 11 18:18:48 crc kubenswrapper[4877]: E1211 18:18:48.518489 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a\": container with ID starting with c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a not found: ID does not exist" containerID="c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.518524 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a"} err="failed to get container status \"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a\": rpc error: code = NotFound desc = could not find container \"c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a\": container with ID starting with c29e682b5666dff5e2d8b0797fc619b82551dce679dfd0c767481fc702e2908a not found: ID does not exist" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.518551 4877 scope.go:117] "RemoveContainer" containerID="a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4" Dec 11 18:18:48 crc kubenswrapper[4877]: E1211 18:18:48.518906 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4\": container with ID starting with a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4 not found: 
ID does not exist" containerID="a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.518952 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4"} err="failed to get container status \"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4\": rpc error: code = NotFound desc = could not find container \"a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4\": container with ID starting with a10ab0081af62057d2f600c9a22b6d42faa500ad6b8be5ad9082d427f7bbe6e4 not found: ID does not exist" Dec 11 18:18:48 crc kubenswrapper[4877]: I1211 18:18:48.936440 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.006286 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.006657 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.006810 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.006888 4877 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhqcn\" (UniqueName: \"kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.007026 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.007207 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0\") pod \"c46cd811-f91f-48b8-aa23-f916227e65d8\" (UID: \"c46cd811-f91f-48b8-aa23-f916227e65d8\") " Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.033797 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn" (OuterVolumeSpecName: "kube-api-access-fhqcn") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "kube-api-access-fhqcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.071284 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.079970 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.081142 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config" (OuterVolumeSpecName: "config") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.087051 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.110679 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.111176 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.111939 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhqcn\" (UniqueName: \"kubernetes.io/projected/c46cd811-f91f-48b8-aa23-f916227e65d8-kube-api-access-fhqcn\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.113976 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.114124 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.150836 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c46cd811-f91f-48b8-aa23-f916227e65d8" (UID: "c46cd811-f91f-48b8-aa23-f916227e65d8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.215790 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c46cd811-f91f-48b8-aa23-f916227e65d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.253064 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" path="/var/lib/kubelet/pods/45886494-4c47-4ebd-8531-4895a7f7a2ed/volumes" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.378964 4877 generic.go:334] "Generic (PLEG): container finished" podID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerID="8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7" exitCode=0 Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379247 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="cinder-scheduler" containerID="cri-o://6410f0c0ffd885c4b82c14dd717e532d8f40771b1e0bcda024b48b722bc50ed1" gracePeriod=30 Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379390 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-882nl" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379649 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-882nl" event={"ID":"c46cd811-f91f-48b8-aa23-f916227e65d8","Type":"ContainerDied","Data":"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7"} Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379724 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-882nl" event={"ID":"c46cd811-f91f-48b8-aa23-f916227e65d8","Type":"ContainerDied","Data":"4ff50c4fbcdd5f85a37974dd5d8eefc1015f88d2ea1403c13a9fe5385ec0b50b"} Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379749 4877 scope.go:117] "RemoveContainer" containerID="8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.379912 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="probe" containerID="cri-o://31db71442d2051d799f8df1a027e595a0e90abd9575aecc694719d3a79283176" gracePeriod=30 Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.411912 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.417560 4877 scope.go:117] "RemoveContainer" containerID="93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.421235 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-882nl"] Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.523174 4877 scope.go:117] "RemoveContainer" containerID="8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7" Dec 11 18:18:49 crc kubenswrapper[4877]: E1211 18:18:49.523998 4877 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7\": container with ID starting with 8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7 not found: ID does not exist" containerID="8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.524044 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7"} err="failed to get container status \"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7\": rpc error: code = NotFound desc = could not find container \"8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7\": container with ID starting with 8dea6a1cfd9a949838411a0675a299bada7c60c3d4f2123d4bbc1e8f2ddcabb7 not found: ID does not exist" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.524077 4877 scope.go:117] "RemoveContainer" containerID="93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6" Dec 11 18:18:49 crc kubenswrapper[4877]: E1211 18:18:49.524332 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6\": container with ID starting with 93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6 not found: ID does not exist" containerID="93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6" Dec 11 18:18:49 crc kubenswrapper[4877]: I1211 18:18:49.524395 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6"} err="failed to get container status \"93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6\": rpc error: code = NotFound desc = could not find 
container \"93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6\": container with ID starting with 93827d1982838564e062945b1db65ec3a05644f63b3d418eee0db12ed18199d6 not found: ID does not exist" Dec 11 18:18:50 crc kubenswrapper[4877]: I1211 18:18:50.617976 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:50 crc kubenswrapper[4877]: I1211 18:18:50.826259 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d988cd48-2h828" Dec 11 18:18:51 crc kubenswrapper[4877]: I1211 18:18:51.229565 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" path="/var/lib/kubelet/pods/c46cd811-f91f-48b8-aa23-f916227e65d8/volumes" Dec 11 18:18:51 crc kubenswrapper[4877]: I1211 18:18:51.413770 4877 generic.go:334] "Generic (PLEG): container finished" podID="c4747402-a420-4cd2-84eb-6775467f2db5" containerID="31db71442d2051d799f8df1a027e595a0e90abd9575aecc694719d3a79283176" exitCode=0 Dec 11 18:18:51 crc kubenswrapper[4877]: I1211 18:18:51.415796 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerDied","Data":"31db71442d2051d799f8df1a027e595a0e90abd9575aecc694719d3a79283176"} Dec 11 18:18:52 crc kubenswrapper[4877]: I1211 18:18:52.676568 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:52 crc kubenswrapper[4877]: I1211 18:18:52.881563 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:54 crc kubenswrapper[4877]: I1211 18:18:54.460328 4877 generic.go:334] "Generic (PLEG): container finished" podID="c4747402-a420-4cd2-84eb-6775467f2db5" containerID="6410f0c0ffd885c4b82c14dd717e532d8f40771b1e0bcda024b48b722bc50ed1" exitCode=0 Dec 11 
18:18:54 crc kubenswrapper[4877]: I1211 18:18:54.460402 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerDied","Data":"6410f0c0ffd885c4b82c14dd717e532d8f40771b1e0bcda024b48b722bc50ed1"} Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.017293 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.179699 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c9dbfd97b-ck4jv" Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.269807 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.467321 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.471171 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon-log" containerID="cri-o://0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405" gracePeriod=30 Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.471268 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" containerID="cri-o://8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51" gracePeriod=30 Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.769488 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f986c9df4-vbvbf" Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.875849 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.876125 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85566dfdb6-gtbzq" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api-log" containerID="cri-o://47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4" gracePeriod=30 Dec 11 18:18:55 crc kubenswrapper[4877]: I1211 18:18:55.876590 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85566dfdb6-gtbzq" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api" containerID="cri-o://e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666" gracePeriod=30 Dec 11 18:18:56 crc kubenswrapper[4877]: I1211 18:18:56.486323 4877 generic.go:334] "Generic (PLEG): container finished" podID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerID="47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4" exitCode=143 Dec 11 18:18:56 crc kubenswrapper[4877]: I1211 18:18:56.486428 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerDied","Data":"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4"} Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.236035 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.247291 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422075 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422317 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422397 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422474 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422622 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.422670 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjb8\" (UniqueName: 
\"kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8\") pod \"c4747402-a420-4cd2-84eb-6775467f2db5\" (UID: \"c4747402-a420-4cd2-84eb-6775467f2db5\") " Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.423849 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.430499 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts" (OuterVolumeSpecName: "scripts") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.430571 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.430668 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8" (OuterVolumeSpecName: "kube-api-access-xsjb8") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "kube-api-access-xsjb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.496297 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.518863 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4747402-a420-4cd2-84eb-6775467f2db5","Type":"ContainerDied","Data":"aeb4ab1aba4e7653927043174fe9bd2b71aa2778f4d5ae6be9e20affb1511837"} Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.518938 4877 scope.go:117] "RemoveContainer" containerID="31db71442d2051d799f8df1a027e595a0e90abd9575aecc694719d3a79283176" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.519129 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.525156 4877 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.525206 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.525217 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.525226 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjb8\" (UniqueName: \"kubernetes.io/projected/c4747402-a420-4cd2-84eb-6775467f2db5-kube-api-access-xsjb8\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.525240 4877 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4747402-a420-4cd2-84eb-6775467f2db5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.551767 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerStarted","Data":"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79"} Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.552597 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="ceilometer-notification-agent" 
containerID="cri-o://b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d" gracePeriod=30 Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.552747 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.553554 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="proxy-httpd" containerID="cri-o://bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79" gracePeriod=30 Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.553638 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="sg-core" containerID="cri-o://1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a" gracePeriod=30 Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.578019 4877 scope.go:117] "RemoveContainer" containerID="6410f0c0ffd885c4b82c14dd717e532d8f40771b1e0bcda024b48b722bc50ed1" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.598263 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data" (OuterVolumeSpecName: "config-data") pod "c4747402-a420-4cd2-84eb-6775467f2db5" (UID: "c4747402-a420-4cd2-84eb-6775467f2db5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.627269 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4747402-a420-4cd2-84eb-6775467f2db5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.864187 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.878343 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901461 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901853 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="cinder-scheduler" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901874 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="cinder-scheduler" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901892 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="init" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901899 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="init" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901912 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api-log" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901918 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api-log" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 
18:18:58.901930 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="dnsmasq-dns" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901937 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="dnsmasq-dns" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901964 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901970 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901984 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon-log" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.901989 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon-log" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.901997 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-httpd" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902002 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-httpd" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.902013 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-api" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902018 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-api" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.902030 4877 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="probe" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902038 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="probe" Dec 11 18:18:58 crc kubenswrapper[4877]: E1211 18:18:58.902048 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902054 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902262 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902276 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902291 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-httpd" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902300 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" containerName="cinder-api-log" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902310 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46cd811-f91f-48b8-aa23-f916227e65d8" containerName="dnsmasq-dns" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902326 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="12291457-a2ba-4bfa-8c21-fdf315e8dc12" containerName="horizon-log" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902339 4877 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="cinder-scheduler" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902357 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="45886494-4c47-4ebd-8531-4895a7f7a2ed" containerName="neutron-api" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.902368 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" containerName="probe" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.903464 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.909039 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6m4g8" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.909220 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.909215 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.913103 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 18:18:58 crc kubenswrapper[4877]: I1211 18:18:58.921011 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.037952 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.038001 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.038036 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcs2\" (UniqueName: \"kubernetes.io/projected/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-kube-api-access-pkcs2\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.038061 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-scripts\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.038103 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.038209 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.051994 4877 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-85566dfdb6-gtbzq" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:37536->10.217.0.160:9311: read: connection reset by peer" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.051987 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85566dfdb6-gtbzq" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:37530->10.217.0.160:9311: read: connection reset by peer" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.139687 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.139744 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.139780 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.139816 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcs2\" (UniqueName: 
\"kubernetes.io/projected/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-kube-api-access-pkcs2\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.139843 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-scripts\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.140598 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.140671 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.147116 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.147291 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " 
pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.153772 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-config-data\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.157745 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-scripts\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.163812 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcs2\" (UniqueName: \"kubernetes.io/projected/60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8-kube-api-access-pkcs2\") pod \"cinder-scheduler-0\" (UID: \"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8\") " pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.229129 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4747402-a420-4cd2-84eb-6775467f2db5" path="/var/lib/kubelet/pods/c4747402-a420-4cd2-84eb-6775467f2db5/volumes" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.284133 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.517834 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.584497 4877 generic.go:334] "Generic (PLEG): container finished" podID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerID="bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79" exitCode=0 Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.584542 4877 generic.go:334] "Generic (PLEG): container finished" podID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerID="1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a" exitCode=2 Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.584598 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerDied","Data":"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79"} Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.584631 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerDied","Data":"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a"} Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.586611 4877 generic.go:334] "Generic (PLEG): container finished" podID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerID="8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51" exitCode=0 Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.586653 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerDied","Data":"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51"} Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.588270 4877 generic.go:334] "Generic (PLEG): container finished" podID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerID="e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666" exitCode=0 Dec 
11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.588324 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerDied","Data":"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666"} Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.588342 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85566dfdb6-gtbzq" event={"ID":"33d11af1-3fbc-4e51-a965-c1e1e3dbc853","Type":"ContainerDied","Data":"3540cd211cc7b0dc4d6238400f6f1c963765051e4f3c13247c697b2075c288cb"} Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.588359 4877 scope.go:117] "RemoveContainer" containerID="e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.588472 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85566dfdb6-gtbzq" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.632656 4877 scope.go:117] "RemoveContainer" containerID="47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.652566 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs\") pod \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.652628 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom\") pod \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.652676 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data\") pod \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.652773 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q8j4\" (UniqueName: \"kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4\") pod \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.652841 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle\") pod \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\" (UID: \"33d11af1-3fbc-4e51-a965-c1e1e3dbc853\") " Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.653926 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs" (OuterVolumeSpecName: "logs") pod "33d11af1-3fbc-4e51-a965-c1e1e3dbc853" (UID: "33d11af1-3fbc-4e51-a965-c1e1e3dbc853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.662549 4877 scope.go:117] "RemoveContainer" containerID="e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.662590 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4" (OuterVolumeSpecName: "kube-api-access-8q8j4") pod "33d11af1-3fbc-4e51-a965-c1e1e3dbc853" (UID: "33d11af1-3fbc-4e51-a965-c1e1e3dbc853"). InnerVolumeSpecName "kube-api-access-8q8j4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.663142 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33d11af1-3fbc-4e51-a965-c1e1e3dbc853" (UID: "33d11af1-3fbc-4e51-a965-c1e1e3dbc853"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:59 crc kubenswrapper[4877]: E1211 18:18:59.663163 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666\": container with ID starting with e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666 not found: ID does not exist" containerID="e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.663203 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666"} err="failed to get container status \"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666\": rpc error: code = NotFound desc = could not find container \"e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666\": container with ID starting with e178a27a915999436077fc4241b255d6ee6e89b7673a67fc84465a5d9e133666 not found: ID does not exist" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.663236 4877 scope.go:117] "RemoveContainer" containerID="47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4" Dec 11 18:18:59 crc kubenswrapper[4877]: E1211 18:18:59.663568 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4\": 
container with ID starting with 47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4 not found: ID does not exist" containerID="47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.663593 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4"} err="failed to get container status \"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4\": rpc error: code = NotFound desc = could not find container \"47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4\": container with ID starting with 47771ab0875eefa0999d9a495880c844d0b59d45be0b863dffe8bb1f61eb72f4 not found: ID does not exist" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.691055 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33d11af1-3fbc-4e51-a965-c1e1e3dbc853" (UID: "33d11af1-3fbc-4e51-a965-c1e1e3dbc853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.716244 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c7754d7b9-8ngjr" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.717227 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data" (OuterVolumeSpecName: "config-data") pod "33d11af1-3fbc-4e51-a965-c1e1e3dbc853" (UID: "33d11af1-3fbc-4e51-a965-c1e1e3dbc853"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.755660 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.755705 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.755715 4877 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.755724 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.755732 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q8j4\" (UniqueName: \"kubernetes.io/projected/33d11af1-3fbc-4e51-a965-c1e1e3dbc853-kube-api-access-8q8j4\") on node \"crc\" DevicePath \"\"" Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.836434 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.836752 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6d4d95f94c-wnhwk" podUID="0767ff35-4ddd-4785-8538-3d65f777518d" containerName="keystone-api" containerID="cri-o://6486cc4af38d96fd26c04f8c4f80f944a889e7b0a2f0201d0d124cd314f67ddd" gracePeriod=30 Dec 11 18:18:59 crc kubenswrapper[4877]: I1211 18:18:59.893976 4877 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 18:19:00 crc kubenswrapper[4877]: I1211 18:19:00.073129 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:19:00 crc kubenswrapper[4877]: I1211 18:19:00.084474 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85566dfdb6-gtbzq"] Dec 11 18:19:00 crc kubenswrapper[4877]: I1211 18:19:00.620894 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8","Type":"ContainerStarted","Data":"50235df86a4fa3f8a57f0286967c85b88697be8ba91fbfb04724770f69a7c1eb"} Dec 11 18:19:00 crc kubenswrapper[4877]: I1211 18:19:00.664787 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.232844 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" path="/var/lib/kubelet/pods/33d11af1-3fbc-4e51-a965-c1e1e3dbc853/volumes" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.638104 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8","Type":"ContainerStarted","Data":"54d7b1c3149f7ce03ca86d5be54526e9846321c4e48f61c3c70d22cd22fe5f92"} Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.638179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8","Type":"ContainerStarted","Data":"e5d64eac65c297e4ac0c552e03c77dd3a3de4882d579fda14a666c9991a2f8b1"} Dec 11 18:19:01 crc kubenswrapper[4877]: 
I1211 18:19:01.654699 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:01 crc kubenswrapper[4877]: E1211 18:19:01.655822 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.655853 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api" Dec 11 18:19:01 crc kubenswrapper[4877]: E1211 18:19:01.655908 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api-log" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.655919 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api-log" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.656127 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.656146 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d11af1-3fbc-4e51-a965-c1e1e3dbc853" containerName="barbican-api-log" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.666416 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.672757 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-28pf2" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.673121 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.686046 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.717472 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.737420 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.737396916 podStartE2EDuration="3.737396916s" podCreationTimestamp="2025-12-11 18:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:19:01.708551721 +0000 UTC m=+1102.734795775" watchObservedRunningTime="2025-12-11 18:19:01.737396916 +0000 UTC m=+1102.763640960" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.838854 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgx6\" (UniqueName: \"kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.838902 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.839046 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.839078 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.940925 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.941001 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.941048 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgx6\" (UniqueName: \"kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 
crc kubenswrapper[4877]: I1211 18:19:01.941072 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.943496 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.966932 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgx6\" (UniqueName: \"kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.968898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:01 crc kubenswrapper[4877]: I1211 18:19:01.971174 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.056763 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.083974 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.107299 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.162440 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.164073 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.185782 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:02 crc kubenswrapper[4877]: E1211 18:19:02.254926 4877 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 11 18:19:02 crc kubenswrapper[4877]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_5d020338-0b95-4228-800f-2e4402139ba9_0(d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b" Netns:"/var/run/netns/bd0c0a1b-51c6-44bb-85c0-96399600a3b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b;K8S_POD_UID=5d020338-0b95-4228-800f-2e4402139ba9" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/5d020338-0b95-4228-800f-2e4402139ba9]: expected pod UID "5d020338-0b95-4228-800f-2e4402139ba9" but got 
"22343eeb-fed7-457f-a507-a83d4071ee3a" from Kube API Dec 11 18:19:02 crc kubenswrapper[4877]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 11 18:19:02 crc kubenswrapper[4877]: > Dec 11 18:19:02 crc kubenswrapper[4877]: E1211 18:19:02.255470 4877 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 11 18:19:02 crc kubenswrapper[4877]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_5d020338-0b95-4228-800f-2e4402139ba9_0(d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b" Netns:"/var/run/netns/bd0c0a1b-51c6-44bb-85c0-96399600a3b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d05bb1f0f7f2ad3656ee74201b15621bb9471c6933111bbbeda9ae5dc5915d2b;K8S_POD_UID=5d020338-0b95-4228-800f-2e4402139ba9" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/5d020338-0b95-4228-800f-2e4402139ba9]: expected pod UID "5d020338-0b95-4228-800f-2e4402139ba9" but got "22343eeb-fed7-457f-a507-a83d4071ee3a" from Kube API Dec 11 18:19:02 crc kubenswrapper[4877]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 11 18:19:02 crc kubenswrapper[4877]: > pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.255474 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8k78\" (UniqueName: \"kubernetes.io/projected/22343eeb-fed7-457f-a507-a83d4071ee3a-kube-api-access-s8k78\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.255610 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.255647 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config-secret\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.255689 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " 
pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.357544 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8k78\" (UniqueName: \"kubernetes.io/projected/22343eeb-fed7-457f-a507-a83d4071ee3a-kube-api-access-s8k78\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.357652 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.357685 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config-secret\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.357718 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.358734 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.362832 4877 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-openstack-config-secret\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.364132 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22343eeb-fed7-457f-a507-a83d4071ee3a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.379907 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8k78\" (UniqueName: \"kubernetes.io/projected/22343eeb-fed7-457f-a507-a83d4071ee3a-kube-api-access-s8k78\") pod \"openstackclient\" (UID: \"22343eeb-fed7-457f-a507-a83d4071ee3a\") " pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.530076 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.674661 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.685402 4877 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5d020338-0b95-4228-800f-2e4402139ba9" podUID="22343eeb-fed7-457f-a507-a83d4071ee3a" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.725432 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.768459 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret\") pod \"5d020338-0b95-4228-800f-2e4402139ba9\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.768567 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle\") pod \"5d020338-0b95-4228-800f-2e4402139ba9\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.768600 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config\") pod \"5d020338-0b95-4228-800f-2e4402139ba9\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.768682 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgx6\" (UniqueName: \"kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6\") pod \"5d020338-0b95-4228-800f-2e4402139ba9\" (UID: \"5d020338-0b95-4228-800f-2e4402139ba9\") " Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.771587 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5d020338-0b95-4228-800f-2e4402139ba9" (UID: "5d020338-0b95-4228-800f-2e4402139ba9"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.778198 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6" (OuterVolumeSpecName: "kube-api-access-jhgx6") pod "5d020338-0b95-4228-800f-2e4402139ba9" (UID: "5d020338-0b95-4228-800f-2e4402139ba9"). InnerVolumeSpecName "kube-api-access-jhgx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.778237 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5d020338-0b95-4228-800f-2e4402139ba9" (UID: "5d020338-0b95-4228-800f-2e4402139ba9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.779405 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d020338-0b95-4228-800f-2e4402139ba9" (UID: "5d020338-0b95-4228-800f-2e4402139ba9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.871835 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhgx6\" (UniqueName: \"kubernetes.io/projected/5d020338-0b95-4228-800f-2e4402139ba9-kube-api-access-jhgx6\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.871875 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.871890 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d020338-0b95-4228-800f-2e4402139ba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:02 crc kubenswrapper[4877]: I1211 18:19:02.871902 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d020338-0b95-4228-800f-2e4402139ba9-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.106171 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.232041 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d020338-0b95-4228-800f-2e4402139ba9" path="/var/lib/kubelet/pods/5d020338-0b95-4228-800f-2e4402139ba9/volumes" Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.694849 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22343eeb-fed7-457f-a507-a83d4071ee3a","Type":"ContainerStarted","Data":"857839dddfb0a44595d417ea4be02a8408c1c04c75b50ae2b77a87652d226efc"} Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.699077 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="0767ff35-4ddd-4785-8538-3d65f777518d" containerID="6486cc4af38d96fd26c04f8c4f80f944a889e7b0a2f0201d0d124cd314f67ddd" exitCode=0 Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.699189 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.699242 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d4d95f94c-wnhwk" event={"ID":"0767ff35-4ddd-4785-8538-3d65f777518d","Type":"ContainerDied","Data":"6486cc4af38d96fd26c04f8c4f80f944a889e7b0a2f0201d0d124cd314f67ddd"} Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.708504 4877 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5d020338-0b95-4228-800f-2e4402139ba9" podUID="22343eeb-fed7-457f-a507-a83d4071ee3a" Dec 11 18:19:03 crc kubenswrapper[4877]: I1211 18:19:03.984362 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.120587 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.120687 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.120764 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9nqf\" (UniqueName: \"kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.120997 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.121070 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.121120 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys\") pod \"0767ff35-4ddd-4785-8538-3d65f777518d\" (UID: \"0767ff35-4ddd-4785-8538-3d65f777518d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.130860 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts" (OuterVolumeSpecName: "scripts") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.132327 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.133077 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.134263 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf" (OuterVolumeSpecName: "kube-api-access-s9nqf") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "kube-api-access-s9nqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.157524 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data" (OuterVolumeSpecName: "config-data") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.166431 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0767ff35-4ddd-4785-8538-3d65f777518d" (UID: "0767ff35-4ddd-4785-8538-3d65f777518d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.223806 4877 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.223846 4877 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.223858 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.223869 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 
crc kubenswrapper[4877]: I1211 18:19:04.223878 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9nqf\" (UniqueName: \"kubernetes.io/projected/0767ff35-4ddd-4785-8538-3d65f777518d-kube-api-access-s9nqf\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.223888 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0767ff35-4ddd-4785-8538-3d65f777518d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.284743 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.619060 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.713104 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6d4d95f94c-wnhwk" event={"ID":"0767ff35-4ddd-4785-8538-3d65f777518d","Type":"ContainerDied","Data":"b5d4c2de6d9dbe752932c217c4effd881a5033371ca8ccf08b93998e0e60739b"} Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.713709 4877 scope.go:117] "RemoveContainer" containerID="6486cc4af38d96fd26c04f8c4f80f944a889e7b0a2f0201d0d124cd314f67ddd" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.713363 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6d4d95f94c-wnhwk" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.716872 4877 generic.go:334] "Generic (PLEG): container finished" podID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerID="b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d" exitCode=0 Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.716907 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerDied","Data":"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d"} Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.716932 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49332496-5e7e-426e-9d51-aee9479d8a0d","Type":"ContainerDied","Data":"2955b63b1215856318e96a1fc65c62d958bc39cd035f76721125bd3f9a753b28"} Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.718271 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734314 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734432 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734476 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734667 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734697 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734876 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.734911 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njj6c\" (UniqueName: \"kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c\") pod \"49332496-5e7e-426e-9d51-aee9479d8a0d\" (UID: \"49332496-5e7e-426e-9d51-aee9479d8a0d\") " Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.736669 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.744621 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts" (OuterVolumeSpecName: "scripts") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.745323 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c" (OuterVolumeSpecName: "kube-api-access-njj6c") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "kube-api-access-njj6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.756255 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.766888 4877 scope.go:117] "RemoveContainer" containerID="bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.770713 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.786075 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6d4d95f94c-wnhwk"] Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.795170 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.802521 4877 scope.go:117] "RemoveContainer" containerID="1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.825469 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843209 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843264 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njj6c\" (UniqueName: \"kubernetes.io/projected/49332496-5e7e-426e-9d51-aee9479d8a0d-kube-api-access-njj6c\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843278 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843291 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843302 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49332496-5e7e-426e-9d51-aee9479d8a0d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843313 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.843564 4877 scope.go:117] "RemoveContainer" containerID="b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.856446 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data" (OuterVolumeSpecName: "config-data") pod "49332496-5e7e-426e-9d51-aee9479d8a0d" (UID: "49332496-5e7e-426e-9d51-aee9479d8a0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.877697 4877 scope.go:117] "RemoveContainer" containerID="bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79" Dec 11 18:19:04 crc kubenswrapper[4877]: E1211 18:19:04.878279 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79\": container with ID starting with bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79 not found: ID does not exist" containerID="bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.878316 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79"} err="failed to get container status \"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79\": rpc error: code = NotFound desc = could not find container \"bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79\": container with ID starting with bde5a4c4e89ad01a7436ce208a677d23eb720c8ee8013ffb55269249afc60f79 not found: ID does not exist" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.878343 4877 scope.go:117] "RemoveContainer" containerID="1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a" Dec 11 18:19:04 crc kubenswrapper[4877]: E1211 18:19:04.879508 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a\": container with ID starting 
with 1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a not found: ID does not exist" containerID="1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.879539 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a"} err="failed to get container status \"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a\": rpc error: code = NotFound desc = could not find container \"1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a\": container with ID starting with 1d26658b41305506b3ae2332489c8b60b1ef8b3b31db1dd61617d8164552e29a not found: ID does not exist" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.879556 4877 scope.go:117] "RemoveContainer" containerID="b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d" Dec 11 18:19:04 crc kubenswrapper[4877]: E1211 18:19:04.886955 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d\": container with ID starting with b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d not found: ID does not exist" containerID="b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.887028 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d"} err="failed to get container status \"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d\": rpc error: code = NotFound desc = could not find container \"b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d\": container with ID starting with b3cabce23b6405774e75385d93ba30725ee92ec251ed7625eb1fc231354a651d not found: ID does 
not exist" Dec 11 18:19:04 crc kubenswrapper[4877]: I1211 18:19:04.946066 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49332496-5e7e-426e-9d51-aee9479d8a0d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.110620 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.154825 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.171216 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:05 crc kubenswrapper[4877]: E1211 18:19:05.178420 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="sg-core" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178458 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="sg-core" Dec 11 18:19:05 crc kubenswrapper[4877]: E1211 18:19:05.178475 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="proxy-httpd" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178481 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="proxy-httpd" Dec 11 18:19:05 crc kubenswrapper[4877]: E1211 18:19:05.178502 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0767ff35-4ddd-4785-8538-3d65f777518d" containerName="keystone-api" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178513 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="0767ff35-4ddd-4785-8538-3d65f777518d" containerName="keystone-api" Dec 11 18:19:05 crc kubenswrapper[4877]: E1211 18:19:05.178535 4877 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="ceilometer-notification-agent" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178542 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="ceilometer-notification-agent" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178768 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="proxy-httpd" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178792 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="sg-core" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178815 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="0767ff35-4ddd-4785-8538-3d65f777518d" containerName="keystone-api" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.178830 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" containerName="ceilometer-notification-agent" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.187299 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.187555 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.193139 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.193417 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.233769 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0767ff35-4ddd-4785-8538-3d65f777518d" path="/var/lib/kubelet/pods/0767ff35-4ddd-4785-8538-3d65f777518d/volumes" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.234442 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49332496-5e7e-426e-9d51-aee9479d8a0d" path="/var/lib/kubelet/pods/49332496-5e7e-426e-9d51-aee9479d8a0d/volumes" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.360622 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361272 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361349 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " 
pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361536 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361698 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361876 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.361961 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vjm\" (UniqueName: \"kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.464147 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.464245 4877 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.464277 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.464394 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.464970 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.465113 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.465212 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: 
I1211 18:19:05.465681 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.465723 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vjm\" (UniqueName: \"kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.486455 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.487778 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.488064 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.499062 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.502618 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vjm\" (UniqueName: \"kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm\") pod \"ceilometer-0\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " pod="openstack/ceilometer-0" Dec 11 18:19:05 crc kubenswrapper[4877]: I1211 18:19:05.523210 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:06 crc kubenswrapper[4877]: I1211 18:19:06.022499 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:06 crc kubenswrapper[4877]: I1211 18:19:06.754718 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerStarted","Data":"28ce5852c00a0281d69f4481a9d304469afb7e48848a3878fa38edae977e9c62"} Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.385043 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.385773 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-log" containerID="cri-o://bd82ae6209ba4077a48213d4f4acb433cbf0d036a0c238bd62c8544955f70c8c" gracePeriod=30 Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.385984 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-httpd" containerID="cri-o://bd70b0974b290e0f8bc491b9e182f6a1f8781e5b16dd57abac3619a74deb3543" gracePeriod=30 Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.769877 
4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerStarted","Data":"3d315bcc22130fd381e261fa9e4d7b9a436e7eaa25ec0407f295c53702393775"} Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.770489 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerStarted","Data":"a1b69f9d32af0c6af5e1c21eb71e5ae1cab494d320aee97bf72442273144c003"} Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.773750 4877 generic.go:334] "Generic (PLEG): container finished" podID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerID="bd82ae6209ba4077a48213d4f4acb433cbf0d036a0c238bd62c8544955f70c8c" exitCode=143 Dec 11 18:19:07 crc kubenswrapper[4877]: I1211 18:19:07.773813 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerDied","Data":"bd82ae6209ba4077a48213d4f4acb433cbf0d036a0c238bd62c8544955f70c8c"} Dec 11 18:19:08 crc kubenswrapper[4877]: I1211 18:19:08.793653 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerStarted","Data":"1533dfbe8607f32046e6667a9bcb6b8661fa33cbee2b70d3fd224a449b035165"} Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.333959 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-79885c8c-7qj69"] Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.336087 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.343231 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.343566 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.344284 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.380829 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79885c8c-7qj69"] Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484580 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-internal-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484655 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-combined-ca-bundle\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484690 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-log-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 
18:19:09.484733 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-config-data\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484762 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwnmb\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-kube-api-access-jwnmb\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484788 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-etc-swift\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484825 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-public-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.484848 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-run-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: 
I1211 18:19:09.578594 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590201 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-combined-ca-bundle\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590289 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-log-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590329 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-config-data\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590359 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwnmb\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-kube-api-access-jwnmb\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590424 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-etc-swift\") pod \"swift-proxy-79885c8c-7qj69\" (UID: 
\"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590490 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-public-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590523 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-run-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.590571 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-internal-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.591068 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-log-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.598933 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6776094e-cd5a-4539-9b5c-368030c70458-run-httpd\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc 
kubenswrapper[4877]: I1211 18:19:09.599509 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-config-data\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.604684 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-etc-swift\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.613159 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-public-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.615252 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-combined-ca-bundle\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.617137 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6776094e-cd5a-4539-9b5c-368030c70458-internal-tls-certs\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.623401 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jwnmb\" (UniqueName: \"kubernetes.io/projected/6776094e-cd5a-4539-9b5c-368030c70458-kube-api-access-jwnmb\") pod \"swift-proxy-79885c8c-7qj69\" (UID: \"6776094e-cd5a-4539-9b5c-368030c70458\") " pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.683029 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.817727 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.818482 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" containerID="cri-o://75a63f7fbbca5ebe1e16d9116c17598de01ed362da13bf36f9be0d50674c126d" gracePeriod=30 Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.818987 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" containerID="cri-o://d979f5d4390e9375dce75c838a421ac8bf08748ed047faa4e573de09b7016298" gracePeriod=30 Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.826292 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.826324 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Dec 11 18:19:09 crc 
kubenswrapper[4877]: I1211 18:19:09.826290 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Dec 11 18:19:09 crc kubenswrapper[4877]: I1211 18:19:09.827860 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": EOF" Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.339722 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.594131 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:49900->10.217.0.150:9292: read: connection reset by peer" Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.594131 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:49888->10.217.0.150:9292: read: connection reset by peer" Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.664023 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 
18:19:10.828307 4877 generic.go:334] "Generic (PLEG): container finished" podID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerID="bd70b0974b290e0f8bc491b9e182f6a1f8781e5b16dd57abac3619a74deb3543" exitCode=0 Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.828366 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerDied","Data":"bd70b0974b290e0f8bc491b9e182f6a1f8781e5b16dd57abac3619a74deb3543"} Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.833090 4877 generic.go:334] "Generic (PLEG): container finished" podID="d38a8876-fdda-4682-938f-bb74481adf46" containerID="75a63f7fbbca5ebe1e16d9116c17598de01ed362da13bf36f9be0d50674c126d" exitCode=143 Dec 11 18:19:10 crc kubenswrapper[4877]: I1211 18:19:10.833125 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerDied","Data":"75a63f7fbbca5ebe1e16d9116c17598de01ed362da13bf36f9be0d50674c126d"} Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.399807 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-44gb8"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.401820 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.417025 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2a65-account-create-update-vf2wg"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.424568 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.428659 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.432181 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqsm\" (UniqueName: \"kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.432338 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.436079 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-44gb8"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.475475 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2a65-account-create-update-vf2wg"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.535077 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sjqc\" (UniqueName: \"kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.535215 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.535268 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqsm\" (UniqueName: \"kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.535299 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.536557 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.560109 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqsm\" (UniqueName: \"kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm\") pod \"nova-api-2a65-account-create-update-vf2wg\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 
18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.579449 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ncpkj"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.581051 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.609599 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ncpkj"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.638063 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sjqc\" (UniqueName: \"kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.638219 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.639083 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.643424 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a10b-account-create-update-csxdx"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.644976 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.649309 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.681146 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a10b-account-create-update-csxdx"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.684288 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sjqc\" (UniqueName: \"kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc\") pod \"nova-api-db-create-44gb8\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.726538 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.735484 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s6qkp"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.737100 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.742929 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6qkp"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.748183 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.748324 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9254g\" (UniqueName: \"kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.767092 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.852910 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.853338 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8nt\" (UniqueName: \"kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.857754 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtrjf\" (UniqueName: \"kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.857885 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.858078 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9254g\" (UniqueName: 
\"kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.858247 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.859627 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.888570 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9254g\" (UniqueName: \"kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g\") pod \"nova-cell0-db-create-ncpkj\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.893467 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3f44-account-create-update-r8mvs"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.895308 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.910395 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.916147 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3f44-account-create-update-r8mvs"] Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.960746 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.963816 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.963925 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.963958 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8nt\" (UniqueName: \"kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.964027 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtrjf\" 
(UniqueName: \"kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.966905 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:11 crc kubenswrapper[4877]: I1211 18:19:11.974969 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.011838 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8nt\" (UniqueName: \"kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt\") pod \"nova-cell1-db-create-s6qkp\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.022098 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtrjf\" (UniqueName: \"kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf\") pod \"nova-cell0-a10b-account-create-update-csxdx\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.029393 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.065744 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.065854 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mnc\" (UniqueName: \"kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.106831 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.167984 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.168103 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mnc\" (UniqueName: \"kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.169321 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.202048 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mnc\" (UniqueName: \"kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc\") pod \"nova-cell1-3f44-account-create-update-r8mvs\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:12 crc kubenswrapper[4877]: I1211 18:19:12.277101 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:15 crc kubenswrapper[4877]: I1211 18:19:15.291557 4877 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode1776d1d-2543-4923-8d58-08610435d2fe"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode1776d1d-2543-4923-8d58-08610435d2fe] : Timed out while waiting for systemd to remove kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice" Dec 11 18:19:15 crc kubenswrapper[4877]: E1211 18:19:15.292144 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pode1776d1d-2543-4923-8d58-08610435d2fe] : unable to destroy cgroup paths for cgroup [kubepods besteffort pode1776d1d-2543-4923-8d58-08610435d2fe] : Timed out while waiting for systemd to remove kubepods-besteffort-pode1776d1d_2543_4923_8d58_08610435d2fe.slice" pod="openstack/cinder-api-0" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" Dec 11 18:19:15 crc kubenswrapper[4877]: I1211 18:19:15.932087 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:19:15 crc kubenswrapper[4877]: I1211 18:19:15.962904 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:19:15 crc kubenswrapper[4877]: I1211 18:19:15.974409 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.010413 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.044585 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.046277 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.053636 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.053897 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.054457 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.178984 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.179081 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.179117 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqv88\" (UniqueName: \"kubernetes.io/projected/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-kube-api-access-jqv88\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.179145 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.179775 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-scripts\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.179899 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.180104 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-logs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.180650 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.180921 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283614 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283713 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283752 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqv88\" (UniqueName: \"kubernetes.io/projected/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-kube-api-access-jqv88\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283791 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283862 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-scripts\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.283906 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.284001 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-logs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.284072 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.284123 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.286708 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.287476 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-logs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.296100 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.297276 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.297362 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.298927 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-scripts\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.305238 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-config-data-custom\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.310425 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.310889 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqv88\" (UniqueName: \"kubernetes.io/projected/fcf8e591-86a6-4c17-89a0-9d93ec7bb590-kube-api-access-jqv88\") pod \"cinder-api-0\" (UID: \"fcf8e591-86a6-4c17-89a0-9d93ec7bb590\") " pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.379885 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.966205 4877 generic.go:334] "Generic (PLEG): container finished" podID="d38a8876-fdda-4682-938f-bb74481adf46" containerID="d979f5d4390e9375dce75c838a421ac8bf08748ed047faa4e573de09b7016298" exitCode=0 Dec 11 18:19:16 crc kubenswrapper[4877]: I1211 18:19:16.966261 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerDied","Data":"d979f5d4390e9375dce75c838a421ac8bf08748ed047faa4e573de09b7016298"} Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.228664 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1776d1d-2543-4923-8d58-08610435d2fe" path="/var/lib/kubelet/pods/e1776d1d-2543-4923-8d58-08610435d2fe/volumes" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.614653 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656567 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656650 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656705 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656753 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmvgj\" (UniqueName: \"kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656808 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656829 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656849 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.656902 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"988ac866-4d7f-4417-9461-57187fe0ffb6\" (UID: \"988ac866-4d7f-4417-9461-57187fe0ffb6\") " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.660593 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs" (OuterVolumeSpecName: "logs") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.660948 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.665642 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). 
InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.669275 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts" (OuterVolumeSpecName: "scripts") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.669433 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj" (OuterVolumeSpecName: "kube-api-access-xmvgj") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "kube-api-access-xmvgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.748732 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.759722 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmvgj\" (UniqueName: \"kubernetes.io/projected/988ac866-4d7f-4417-9461-57187fe0ffb6-kube-api-access-xmvgj\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.759890 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.759985 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.760089 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.760165 4877 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.760252 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/988ac866-4d7f-4417-9461-57187fe0ffb6-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.793549 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.805757 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data" (OuterVolumeSpecName: "config-data") pod "988ac866-4d7f-4417-9461-57187fe0ffb6" (UID: "988ac866-4d7f-4417-9461-57187fe0ffb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.809910 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.861035 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.861080 4877 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/988ac866-4d7f-4417-9461-57187fe0ffb6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.861094 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.965763 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.979664 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"22343eeb-fed7-457f-a507-a83d4071ee3a","Type":"ContainerStarted","Data":"c77e233c4d85503b6fca54edbae97f40ddac0c28862e7e7967ce04a93a66cdff"} Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.986545 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d38a8876-fdda-4682-938f-bb74481adf46","Type":"ContainerDied","Data":"f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5"} Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.986627 4877 scope.go:117] "RemoveContainer" containerID="d979f5d4390e9375dce75c838a421ac8bf08748ed047faa4e573de09b7016298" Dec 11 18:19:17 crc kubenswrapper[4877]: I1211 18:19:17.986772 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.002993 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"988ac866-4d7f-4417-9461-57187fe0ffb6","Type":"ContainerDied","Data":"eb122de0e8f917e5a4e2925f94a42f19d671d5ce51e4d60e55bff389a3858462"} Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.003587 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.015549 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.744358946 podStartE2EDuration="16.015527983s" podCreationTimestamp="2025-12-11 18:19:02 +0000 UTC" firstStartedPulling="2025-12-11 18:19:03.117539188 +0000 UTC m=+1104.143783222" lastFinishedPulling="2025-12-11 18:19:17.388708215 +0000 UTC m=+1118.414952259" observedRunningTime="2025-12-11 18:19:18.014840685 +0000 UTC m=+1119.041084729" watchObservedRunningTime="2025-12-11 18:19:18.015527983 +0000 UTC m=+1119.041772027" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.056706 4877 scope.go:117] "RemoveContainer" containerID="75a63f7fbbca5ebe1e16d9116c17598de01ed362da13bf36f9be0d50674c126d" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.064644 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.064870 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.065154 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.068519 4877 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs" (OuterVolumeSpecName: "logs") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.066442 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.073191 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.074497 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.074594 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts\") pod \"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.074782 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run\") pod 
\"d38a8876-fdda-4682-938f-bb74481adf46\" (UID: \"d38a8876-fdda-4682-938f-bb74481adf46\") " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.075892 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.073253 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.080300 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p" (OuterVolumeSpecName: "kube-api-access-fdp6p") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "kube-api-access-fdp6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.082803 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.102497 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.103793 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts" (OuterVolumeSpecName: "scripts") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.109684 4877 scope.go:117] "RemoveContainer" containerID="bd70b0974b290e0f8bc491b9e182f6a1f8781e5b16dd57abac3619a74deb3543" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.118299 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.147548 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: E1211 18:19:18.148116 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.148138 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: E1211 18:19:18.148176 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.148186 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: E1211 18:19:18.148200 4877 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.148208 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: E1211 18:19:18.148231 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.148239 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.151737 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.151771 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-log" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.151784 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38a8876-fdda-4682-938f-bb74481adf46" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.151793 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" containerName="glance-httpd" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.152933 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.170739 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.171081 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.171097 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.176807 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdp6p\" (UniqueName: \"kubernetes.io/projected/d38a8876-fdda-4682-938f-bb74481adf46-kube-api-access-fdp6p\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.176834 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.176848 4877 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d38a8876-fdda-4682-938f-bb74481adf46-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.176872 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.180730 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.180987 4877 scope.go:117] "RemoveContainer" containerID="bd82ae6209ba4077a48213d4f4acb433cbf0d036a0c238bd62c8544955f70c8c" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.202851 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.206768 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.228524 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data" (OuterVolumeSpecName: "config-data") pod "d38a8876-fdda-4682-938f-bb74481adf46" (UID: "d38a8876-fdda-4682-938f-bb74481adf46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.283804 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.284019 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285327 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285451 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrg4\" (UniqueName: \"kubernetes.io/projected/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-kube-api-access-cqrg4\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285573 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285755 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-logs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285863 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-scripts\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.285890 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.286017 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.286038 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.295947 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.298469 4877 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d38a8876-fdda-4682-938f-bb74481adf46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.384018 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400390 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrg4\" (UniqueName: \"kubernetes.io/projected/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-kube-api-access-cqrg4\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400505 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400607 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-logs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400664 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-scripts\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " 
pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400705 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400767 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400884 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400948 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.400988 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.401401 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-logs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.401525 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.406247 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.408221 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.410185 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.412778 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.420260 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.422435 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.422433 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrg4\" (UniqueName: \"kubernetes.io/projected/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-kube-api-access-cqrg4\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.426026 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8011cc1-2d08-433e-bc2b-71f11aa75cd2-config-data\") pod \"glance-default-external-api-0\" (UID: \"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.429211 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.433416 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.436099 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.464073 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a8011cc1-2d08-433e-bc2b-71f11aa75cd2\") " pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.506199 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: E1211 18:19:18.530454 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38a8876_fdda_4682_938f_bb74481adf46.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38a8876_fdda_4682_938f_bb74481adf46.slice/crio-f64980c6c0f33897cde471ac21d4405fd40fabdd612b894bedd431fc961e04f5\": RecentStats: unable to find data in memory cache]" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.605874 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606344 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2v6d\" (UniqueName: \"kubernetes.io/projected/11c62194-8ad1-4529-98d8-7ad070a3ac30-kube-api-access-q2v6d\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606413 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606508 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606568 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606604 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606633 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.606653 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711713 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711776 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2v6d\" (UniqueName: \"kubernetes.io/projected/11c62194-8ad1-4529-98d8-7ad070a3ac30-kube-api-access-q2v6d\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711823 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711873 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711907 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711934 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711962 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.711981 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-logs\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.712769 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-logs\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.714785 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11c62194-8ad1-4529-98d8-7ad070a3ac30-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc 
kubenswrapper[4877]: I1211 18:19:18.715046 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.720654 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.720668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.728185 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.733024 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11c62194-8ad1-4529-98d8-7ad070a3ac30-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.733329 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q2v6d\" (UniqueName: \"kubernetes.io/projected/11c62194-8ad1-4529-98d8-7ad070a3ac30-kube-api-access-q2v6d\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.775822 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"11c62194-8ad1-4529-98d8-7ad070a3ac30\") " pod="openstack/glance-default-internal-api-0" Dec 11 18:19:18 crc kubenswrapper[4877]: I1211 18:19:18.903999 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.013912 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6qkp"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.028421 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-44gb8"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.041214 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2a65-account-create-update-vf2wg"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.052296 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a10b-account-create-update-csxdx"] Dec 11 18:19:19 crc kubenswrapper[4877]: W1211 18:19:19.076767 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b57b46_038c_402a_ab72_36f2870a32fd.slice/crio-f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744 WatchSource:0}: Error finding container f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744: Status 404 returned error can't find the container with id 
f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744 Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.079362 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3f44-account-create-update-r8mvs"] Dec 11 18:19:19 crc kubenswrapper[4877]: W1211 18:19:19.083781 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfefe31f3_c374_42c0_9af1_a7e2d095bc6d.slice/crio-d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f WatchSource:0}: Error finding container d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f: Status 404 returned error can't find the container with id d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.094165 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.102154 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ncpkj"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.134935 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79885c8c-7qj69"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.265324 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988ac866-4d7f-4417-9461-57187fe0ffb6" path="/var/lib/kubelet/pods/988ac866-4d7f-4417-9461-57187fe0ffb6/volumes" Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.266407 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38a8876-fdda-4682-938f-bb74481adf46" path="/var/lib/kubelet/pods/d38a8876-fdda-4682-938f-bb74481adf46/volumes" Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.267154 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 18:19:19 crc kubenswrapper[4877]: I1211 18:19:19.656597 4877 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 18:19:19 crc kubenswrapper[4877]: W1211 18:19:19.714701 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11c62194_8ad1_4529_98d8_7ad070a3ac30.slice/crio-e445bed9a28619197647506ba8f044d9f696483d5fcf840cae9f8c87fa76da45 WatchSource:0}: Error finding container e445bed9a28619197647506ba8f044d9f696483d5fcf840cae9f8c87fa76da45: Status 404 returned error can't find the container with id e445bed9a28619197647506ba8f044d9f696483d5fcf840cae9f8c87fa76da45 Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.099975 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79885c8c-7qj69" event={"ID":"6776094e-cd5a-4539-9b5c-368030c70458","Type":"ContainerStarted","Data":"59a03d82cbd7eff132921005f0b35a65159c9d1dd7524402f789fd3a54d3400d"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.108918 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" event={"ID":"ce805e83-9fac-4e8d-a823-33210302631d","Type":"ContainerStarted","Data":"056d9294d137877c00f2775e30ac2ea37bce34b967f8a64cf7971c162afbe526"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.108998 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" event={"ID":"ce805e83-9fac-4e8d-a823-33210302631d","Type":"ContainerStarted","Data":"4d7bb2cc94785f5aa6f78b867a67898f1e8e54f49456b0bf2497355d1f4853c3"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.110745 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ncpkj" event={"ID":"fefe31f3-c374-42c0-9af1-a7e2d095bc6d","Type":"ContainerStarted","Data":"d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.119438 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" event={"ID":"feb8b32a-6068-41da-bac0-f13c2a25e815","Type":"ContainerStarted","Data":"c3338b4fd32b4b555fa63a909015e496222062b6ec8d787e33027293951f253a"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.127886 4877 generic.go:334] "Generic (PLEG): container finished" podID="f0501ada-a122-49bd-a65b-52ff7ee6fe00" containerID="9eddaed452cb9848a0511be0a491bbd1d20878cf5ee8e1fbd39d39c2a93fd142" exitCode=0 Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.128064 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6qkp" event={"ID":"f0501ada-a122-49bd-a65b-52ff7ee6fe00","Type":"ContainerDied","Data":"9eddaed452cb9848a0511be0a491bbd1d20878cf5ee8e1fbd39d39c2a93fd142"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.128165 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6qkp" event={"ID":"f0501ada-a122-49bd-a65b-52ff7ee6fe00","Type":"ContainerStarted","Data":"74ad1d85066df676b9fc0901ef3ffadc9a7999a51856496c5fb8b730f16e3512"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.132022 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" podStartSLOduration=9.132002078 podStartE2EDuration="9.132002078s" podCreationTimestamp="2025-12-11 18:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:19:20.130053716 +0000 UTC m=+1121.156297780" watchObservedRunningTime="2025-12-11 18:19:20.132002078 +0000 UTC m=+1121.158246122" Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.137004 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"11c62194-8ad1-4529-98d8-7ad070a3ac30","Type":"ContainerStarted","Data":"e445bed9a28619197647506ba8f044d9f696483d5fcf840cae9f8c87fa76da45"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.142934 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a8011cc1-2d08-433e-bc2b-71f11aa75cd2","Type":"ContainerStarted","Data":"af722a7c00737b60f51a206425fdd1987d9c31507cd44dce03024afd1cc16eef"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.145081 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-44gb8" event={"ID":"18b57b46-038c-402a-ab72-36f2870a32fd","Type":"ContainerStarted","Data":"f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.148102 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a65-account-create-update-vf2wg" event={"ID":"2a34fa8b-c5b0-4297-bbc6-609bf82854f7","Type":"ContainerStarted","Data":"123c1bb3bb51bd13c9e4f76aa821358d89e93b02927c10ab9d23445aeb3cd5af"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.170180 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcf8e591-86a6-4c17-89a0-9d93ec7bb590","Type":"ContainerStarted","Data":"438be5446d87875869befbc348b0e7018dfafe75ec0f8e1d85e78be3d27761f0"} Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.665859 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5757846754-b6r7j" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Dec 11 18:19:20 crc kubenswrapper[4877]: I1211 18:19:20.666303 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:19:21 crc 
kubenswrapper[4877]: I1211 18:19:21.196679 4877 generic.go:334] "Generic (PLEG): container finished" podID="2a34fa8b-c5b0-4297-bbc6-609bf82854f7" containerID="4b7f7b11dd4bb64860113c2af0fea830d8bcda5eb68b312b2be04aff6950b812" exitCode=0 Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.197168 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a65-account-create-update-vf2wg" event={"ID":"2a34fa8b-c5b0-4297-bbc6-609bf82854f7","Type":"ContainerDied","Data":"4b7f7b11dd4bb64860113c2af0fea830d8bcda5eb68b312b2be04aff6950b812"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.202515 4877 generic.go:334] "Generic (PLEG): container finished" podID="fefe31f3-c374-42c0-9af1-a7e2d095bc6d" containerID="9da232cc81696c4f0519fbd27d5cce68f2943a6679d5a012a6b07f8aeea5312f" exitCode=0 Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.202650 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ncpkj" event={"ID":"fefe31f3-c374-42c0-9af1-a7e2d095bc6d","Type":"ContainerDied","Data":"9da232cc81696c4f0519fbd27d5cce68f2943a6679d5a012a6b07f8aeea5312f"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.209777 4877 generic.go:334] "Generic (PLEG): container finished" podID="ce805e83-9fac-4e8d-a823-33210302631d" containerID="056d9294d137877c00f2775e30ac2ea37bce34b967f8a64cf7971c162afbe526" exitCode=0 Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.209833 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" event={"ID":"ce805e83-9fac-4e8d-a823-33210302631d","Type":"ContainerDied","Data":"056d9294d137877c00f2775e30ac2ea37bce34b967f8a64cf7971c162afbe526"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.225405 4877 generic.go:334] "Generic (PLEG): container finished" podID="feb8b32a-6068-41da-bac0-f13c2a25e815" containerID="fa0f679b4a7bb89bbdc0d4781c54155065ba140b832a69fc2245994f97aa382f" exitCode=0 Dec 11 18:19:21 crc 
kubenswrapper[4877]: I1211 18:19:21.240599 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79885c8c-7qj69" event={"ID":"6776094e-cd5a-4539-9b5c-368030c70458","Type":"ContainerStarted","Data":"0eec763cfa9060952dfbc0bfe12417aace89c5eeb16db9719ed0c01867b6a81b"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.240656 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.240670 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79885c8c-7qj69" event={"ID":"6776094e-cd5a-4539-9b5c-368030c70458","Type":"ContainerStarted","Data":"ad2bfaaec16f0da7c53147265ac4c2246dd58dd84c4ba5508ea671c9e387b9dc"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.240681 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.240692 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" event={"ID":"feb8b32a-6068-41da-bac0-f13c2a25e815","Type":"ContainerDied","Data":"fa0f679b4a7bb89bbdc0d4781c54155065ba140b832a69fc2245994f97aa382f"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.242646 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11c62194-8ad1-4529-98d8-7ad070a3ac30","Type":"ContainerStarted","Data":"90dcf13c3b282b5ffb16b47f240c2e482803f3b98a5793a691ac0d164d360af9"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.252430 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a8011cc1-2d08-433e-bc2b-71f11aa75cd2","Type":"ContainerStarted","Data":"a55c7ab60c4c49a1f86e45a6f268d75bcc0a4d88a52ee0dca6319caf6b076300"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.255042 4877 generic.go:334] 
"Generic (PLEG): container finished" podID="18b57b46-038c-402a-ab72-36f2870a32fd" containerID="b05c2b7b38f37068eb19015fbff2ac4bbc376d0cf6a01bfe5a33e36652620f6a" exitCode=0 Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.255112 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-44gb8" event={"ID":"18b57b46-038c-402a-ab72-36f2870a32fd","Type":"ContainerDied","Data":"b05c2b7b38f37068eb19015fbff2ac4bbc376d0cf6a01bfe5a33e36652620f6a"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.267825 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcf8e591-86a6-4c17-89a0-9d93ec7bb590","Type":"ContainerStarted","Data":"d832a6f65f4063ff4065209100fdcc3fd2af80ce2fddbc8885784f13ccb076a4"} Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.345745 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-79885c8c-7qj69" podStartSLOduration=12.345714075 podStartE2EDuration="12.345714075s" podCreationTimestamp="2025-12-11 18:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:19:21.334840316 +0000 UTC m=+1122.361084370" watchObservedRunningTime="2025-12-11 18:19:21.345714075 +0000 UTC m=+1122.371958119" Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.866691 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.921785 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts\") pod \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.921916 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc8nt\" (UniqueName: \"kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt\") pod \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\" (UID: \"f0501ada-a122-49bd-a65b-52ff7ee6fe00\") " Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.923461 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0501ada-a122-49bd-a65b-52ff7ee6fe00" (UID: "f0501ada-a122-49bd-a65b-52ff7ee6fe00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:21 crc kubenswrapper[4877]: I1211 18:19:21.948557 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt" (OuterVolumeSpecName: "kube-api-access-rc8nt") pod "f0501ada-a122-49bd-a65b-52ff7ee6fe00" (UID: "f0501ada-a122-49bd-a65b-52ff7ee6fe00"). InnerVolumeSpecName "kube-api-access-rc8nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.025295 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0501ada-a122-49bd-a65b-52ff7ee6fe00-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.025329 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc8nt\" (UniqueName: \"kubernetes.io/projected/f0501ada-a122-49bd-a65b-52ff7ee6fe00-kube-api-access-rc8nt\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.301343 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6qkp" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.302158 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6qkp" event={"ID":"f0501ada-a122-49bd-a65b-52ff7ee6fe00","Type":"ContainerDied","Data":"74ad1d85066df676b9fc0901ef3ffadc9a7999a51856496c5fb8b730f16e3512"} Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.302212 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ad1d85066df676b9fc0901ef3ffadc9a7999a51856496c5fb8b730f16e3512" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.337875 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11c62194-8ad1-4529-98d8-7ad070a3ac30","Type":"ContainerStarted","Data":"6cf2c0c13d0d0df5b1f0e38a2042976a7cae0881f4c6f70edb27206e908ccf07"} Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.345629 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a8011cc1-2d08-433e-bc2b-71f11aa75cd2","Type":"ContainerStarted","Data":"10fb36ae70ba0e72ecaa4a1917076e9c8f6fc0feab595e66a2f1941e7cecd26d"} Dec 11 18:19:22 crc kubenswrapper[4877]: 
I1211 18:19:22.360014 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fcf8e591-86a6-4c17-89a0-9d93ec7bb590","Type":"ContainerStarted","Data":"45449b0bcc3280a2aeec915e52f444661a15b8dbc3654d547a440107f62a2777"} Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.360080 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.411169 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.411148657 podStartE2EDuration="4.411148657s" podCreationTimestamp="2025-12-11 18:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:19:22.375000268 +0000 UTC m=+1123.401244322" watchObservedRunningTime="2025-12-11 18:19:22.411148657 +0000 UTC m=+1123.437392701" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.426794 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.426770322 podStartE2EDuration="7.426770322s" podCreationTimestamp="2025-12-11 18:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:19:22.418838041 +0000 UTC m=+1123.445082085" watchObservedRunningTime="2025-12-11 18:19:22.426770322 +0000 UTC m=+1123.453014366" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.457345 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.457302272 podStartE2EDuration="4.457302272s" podCreationTimestamp="2025-12-11 18:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 
18:19:22.450806909 +0000 UTC m=+1123.477050963" watchObservedRunningTime="2025-12-11 18:19:22.457302272 +0000 UTC m=+1123.483546316" Dec 11 18:19:22 crc kubenswrapper[4877]: I1211 18:19:22.922526 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.055897 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts\") pod \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.056168 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9254g\" (UniqueName: \"kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g\") pod \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\" (UID: \"fefe31f3-c374-42c0-9af1-a7e2d095bc6d\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.057617 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fefe31f3-c374-42c0-9af1-a7e2d095bc6d" (UID: "fefe31f3-c374-42c0-9af1-a7e2d095bc6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.084286 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g" (OuterVolumeSpecName: "kube-api-access-9254g") pod "fefe31f3-c374-42c0-9af1-a7e2d095bc6d" (UID: "fefe31f3-c374-42c0-9af1-a7e2d095bc6d"). InnerVolumeSpecName "kube-api-access-9254g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.158759 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9254g\" (UniqueName: \"kubernetes.io/projected/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-kube-api-access-9254g\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.158791 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fefe31f3-c374-42c0-9af1-a7e2d095bc6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.271461 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.284657 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.302026 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.362094 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts\") pod \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.362333 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqsm\" (UniqueName: \"kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm\") pod \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\" (UID: \"2a34fa8b-c5b0-4297-bbc6-609bf82854f7\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.364019 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a34fa8b-c5b0-4297-bbc6-609bf82854f7" (UID: "2a34fa8b-c5b0-4297-bbc6-609bf82854f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.368688 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm" (OuterVolumeSpecName: "kube-api-access-jdqsm") pod "2a34fa8b-c5b0-4297-bbc6-609bf82854f7" (UID: "2a34fa8b-c5b0-4297-bbc6-609bf82854f7"). InnerVolumeSpecName "kube-api-access-jdqsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.374061 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2a65-account-create-update-vf2wg" event={"ID":"2a34fa8b-c5b0-4297-bbc6-609bf82854f7","Type":"ContainerDied","Data":"123c1bb3bb51bd13c9e4f76aa821358d89e93b02927c10ab9d23445aeb3cd5af"} Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.374106 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123c1bb3bb51bd13c9e4f76aa821358d89e93b02927c10ab9d23445aeb3cd5af" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.374194 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2a65-account-create-update-vf2wg" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.375694 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ncpkj" event={"ID":"fefe31f3-c374-42c0-9af1-a7e2d095bc6d","Type":"ContainerDied","Data":"d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f"} Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.375719 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1665894f0f84497681db4316df5e161d8df0eb78ecd5c7f583fd859b9de596f" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.375785 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ncpkj" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.380031 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" event={"ID":"ce805e83-9fac-4e8d-a823-33210302631d","Type":"ContainerDied","Data":"4d7bb2cc94785f5aa6f78b867a67898f1e8e54f49456b0bf2497355d1f4853c3"} Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.380209 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3f44-account-create-update-r8mvs" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.380225 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7bb2cc94785f5aa6f78b867a67898f1e8e54f49456b0bf2497355d1f4853c3" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.381186 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.383833 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" event={"ID":"feb8b32a-6068-41da-bac0-f13c2a25e815","Type":"ContainerDied","Data":"c3338b4fd32b4b555fa63a909015e496222062b6ec8d787e33027293951f253a"} Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.383935 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3338b4fd32b4b555fa63a909015e496222062b6ec8d787e33027293951f253a" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.384913 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a10b-account-create-update-csxdx" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464087 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtrjf\" (UniqueName: \"kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf\") pod \"feb8b32a-6068-41da-bac0-f13c2a25e815\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464189 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts\") pod \"ce805e83-9fac-4e8d-a823-33210302631d\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464292 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts\") pod \"feb8b32a-6068-41da-bac0-f13c2a25e815\" (UID: \"feb8b32a-6068-41da-bac0-f13c2a25e815\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464401 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7mnc\" (UniqueName: \"kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc\") pod \"ce805e83-9fac-4e8d-a823-33210302631d\" (UID: \"ce805e83-9fac-4e8d-a823-33210302631d\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464915 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqsm\" (UniqueName: \"kubernetes.io/projected/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-kube-api-access-jdqsm\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.464931 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a34fa8b-c5b0-4297-bbc6-609bf82854f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.465758 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce805e83-9fac-4e8d-a823-33210302631d" (UID: "ce805e83-9fac-4e8d-a823-33210302631d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.465830 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "feb8b32a-6068-41da-bac0-f13c2a25e815" (UID: "feb8b32a-6068-41da-bac0-f13c2a25e815"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.472639 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc" (OuterVolumeSpecName: "kube-api-access-h7mnc") pod "ce805e83-9fac-4e8d-a823-33210302631d" (UID: "ce805e83-9fac-4e8d-a823-33210302631d"). InnerVolumeSpecName "kube-api-access-h7mnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.473224 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf" (OuterVolumeSpecName: "kube-api-access-xtrjf") pod "feb8b32a-6068-41da-bac0-f13c2a25e815" (UID: "feb8b32a-6068-41da-bac0-f13c2a25e815"). InnerVolumeSpecName "kube-api-access-xtrjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.566317 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts\") pod \"18b57b46-038c-402a-ab72-36f2870a32fd\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.566713 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sjqc\" (UniqueName: \"kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc\") pod \"18b57b46-038c-402a-ab72-36f2870a32fd\" (UID: \"18b57b46-038c-402a-ab72-36f2870a32fd\") " Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.566963 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18b57b46-038c-402a-ab72-36f2870a32fd" (UID: "18b57b46-038c-402a-ab72-36f2870a32fd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.567934 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/feb8b32a-6068-41da-bac0-f13c2a25e815-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.567958 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7mnc\" (UniqueName: \"kubernetes.io/projected/ce805e83-9fac-4e8d-a823-33210302631d-kube-api-access-h7mnc\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.567972 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b57b46-038c-402a-ab72-36f2870a32fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.567984 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtrjf\" (UniqueName: \"kubernetes.io/projected/feb8b32a-6068-41da-bac0-f13c2a25e815-kube-api-access-xtrjf\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.567997 4877 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce805e83-9fac-4e8d-a823-33210302631d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.571458 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc" (OuterVolumeSpecName: "kube-api-access-8sjqc") pod "18b57b46-038c-402a-ab72-36f2870a32fd" (UID: "18b57b46-038c-402a-ab72-36f2870a32fd"). InnerVolumeSpecName "kube-api-access-8sjqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:23 crc kubenswrapper[4877]: I1211 18:19:23.669482 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sjqc\" (UniqueName: \"kubernetes.io/projected/18b57b46-038c-402a-ab72-36f2870a32fd-kube-api-access-8sjqc\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:24 crc kubenswrapper[4877]: I1211 18:19:24.394219 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-44gb8" event={"ID":"18b57b46-038c-402a-ab72-36f2870a32fd","Type":"ContainerDied","Data":"f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744"} Dec 11 18:19:24 crc kubenswrapper[4877]: I1211 18:19:24.394269 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6218ab36c4599ccfed447d0e9a10d084bbf40f43aea3a7178cba80314903744" Dec 11 18:19:24 crc kubenswrapper[4877]: I1211 18:19:24.394355 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-44gb8" Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408006 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerStarted","Data":"c2f90dd43f4e4998f3b08001f74365fe3043f5f767705bed1ad81e1e904b4a8c"} Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408505 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408298 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-central-agent" containerID="cri-o://a1b69f9d32af0c6af5e1c21eb71e5ae1cab494d320aee97bf72442273144c003" gracePeriod=30 Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408495 4877 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="sg-core" containerID="cri-o://1533dfbe8607f32046e6667a9bcb6b8661fa33cbee2b70d3fd224a449b035165" gracePeriod=30 Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408590 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="proxy-httpd" containerID="cri-o://c2f90dd43f4e4998f3b08001f74365fe3043f5f767705bed1ad81e1e904b4a8c" gracePeriod=30 Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.408461 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-notification-agent" containerID="cri-o://3d315bcc22130fd381e261fa9e4d7b9a436e7eaa25ec0407f295c53702393775" gracePeriod=30 Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.443061 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.435277318 podStartE2EDuration="20.443034895s" podCreationTimestamp="2025-12-11 18:19:05 +0000 UTC" firstStartedPulling="2025-12-11 18:19:06.054124398 +0000 UTC m=+1107.080368442" lastFinishedPulling="2025-12-11 18:19:25.061881975 +0000 UTC m=+1126.088126019" observedRunningTime="2025-12-11 18:19:25.43416327 +0000 UTC m=+1126.460407324" watchObservedRunningTime="2025-12-11 18:19:25.443034895 +0000 UTC m=+1126.469278939" Dec 11 18:19:25 crc kubenswrapper[4877]: I1211 18:19:25.900431 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019169 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019323 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctpsl\" (UniqueName: \"kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019411 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019551 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019581 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019735 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.019829 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data\") pod \"7e767786-f0b1-4dae-b7c5-fd1e00046935\" (UID: \"7e767786-f0b1-4dae-b7c5-fd1e00046935\") " Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.022547 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs" (OuterVolumeSpecName: "logs") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.028017 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl" (OuterVolumeSpecName: "kube-api-access-ctpsl") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "kube-api-access-ctpsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.029024 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.046395 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data" (OuterVolumeSpecName: "config-data") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.048405 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts" (OuterVolumeSpecName: "scripts") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.049994 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.089547 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7e767786-f0b1-4dae-b7c5-fd1e00046935" (UID: "7e767786-f0b1-4dae-b7c5-fd1e00046935"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122737 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122791 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctpsl\" (UniqueName: \"kubernetes.io/projected/7e767786-f0b1-4dae-b7c5-fd1e00046935-kube-api-access-ctpsl\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122807 4877 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122819 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e767786-f0b1-4dae-b7c5-fd1e00046935-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122834 4877 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e767786-f0b1-4dae-b7c5-fd1e00046935-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122845 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.122854 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e767786-f0b1-4dae-b7c5-fd1e00046935-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437040 4877 generic.go:334] 
"Generic (PLEG): container finished" podID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerID="c2f90dd43f4e4998f3b08001f74365fe3043f5f767705bed1ad81e1e904b4a8c" exitCode=0 Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437078 4877 generic.go:334] "Generic (PLEG): container finished" podID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerID="1533dfbe8607f32046e6667a9bcb6b8661fa33cbee2b70d3fd224a449b035165" exitCode=2 Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437089 4877 generic.go:334] "Generic (PLEG): container finished" podID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerID="a1b69f9d32af0c6af5e1c21eb71e5ae1cab494d320aee97bf72442273144c003" exitCode=0 Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437086 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerDied","Data":"c2f90dd43f4e4998f3b08001f74365fe3043f5f767705bed1ad81e1e904b4a8c"} Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437158 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerDied","Data":"1533dfbe8607f32046e6667a9bcb6b8661fa33cbee2b70d3fd224a449b035165"} Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.437175 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerDied","Data":"a1b69f9d32af0c6af5e1c21eb71e5ae1cab494d320aee97bf72442273144c003"} Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.440713 4877 generic.go:334] "Generic (PLEG): container finished" podID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerID="0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405" exitCode=137 Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.440749 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" 
event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerDied","Data":"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405"} Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.440777 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5757846754-b6r7j" event={"ID":"7e767786-f0b1-4dae-b7c5-fd1e00046935","Type":"ContainerDied","Data":"a1772923300b0253d85948074064f66593bae2fb2e333c2158ad6228088def69"} Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.440821 4877 scope.go:117] "RemoveContainer" containerID="8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.440854 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5757846754-b6r7j" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.487526 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.506173 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5757846754-b6r7j"] Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.648523 4877 scope.go:117] "RemoveContainer" containerID="0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.672230 4877 scope.go:117] "RemoveContainer" containerID="8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51" Dec 11 18:19:26 crc kubenswrapper[4877]: E1211 18:19:26.673145 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51\": container with ID starting with 8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51 not found: ID does not exist" containerID="8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51" Dec 11 18:19:26 crc 
kubenswrapper[4877]: I1211 18:19:26.673185 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51"} err="failed to get container status \"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51\": rpc error: code = NotFound desc = could not find container \"8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51\": container with ID starting with 8e764b5cee6f743b749ace4261633961a405255c20519fae3935d10c8dd58b51 not found: ID does not exist" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.673212 4877 scope.go:117] "RemoveContainer" containerID="0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405" Dec 11 18:19:26 crc kubenswrapper[4877]: E1211 18:19:26.673521 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405\": container with ID starting with 0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405 not found: ID does not exist" containerID="0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405" Dec 11 18:19:26 crc kubenswrapper[4877]: I1211 18:19:26.673544 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405"} err="failed to get container status \"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405\": rpc error: code = NotFound desc = could not find container \"0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405\": container with ID starting with 0cd4e0aa09f476a21016a9718fd1e71baff6f63888026e4059bb942c93702405 not found: ID does not exist" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.011874 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rc8vk"] Dec 11 18:19:27 crc 
kubenswrapper[4877]: E1211 18:19:27.012288 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefe31f3-c374-42c0-9af1-a7e2d095bc6d" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012305 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefe31f3-c374-42c0-9af1-a7e2d095bc6d" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012319 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a34fa8b-c5b0-4297-bbc6-609bf82854f7" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012325 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a34fa8b-c5b0-4297-bbc6-609bf82854f7" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012333 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce805e83-9fac-4e8d-a823-33210302631d" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012339 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce805e83-9fac-4e8d-a823-33210302631d" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012354 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0501ada-a122-49bd-a65b-52ff7ee6fe00" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012360 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0501ada-a122-49bd-a65b-52ff7ee6fe00" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012386 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b57b46-038c-402a-ab72-36f2870a32fd" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012392 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18b57b46-038c-402a-ab72-36f2870a32fd" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012413 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb8b32a-6068-41da-bac0-f13c2a25e815" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012423 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb8b32a-6068-41da-bac0-f13c2a25e815" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012437 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012443 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" Dec 11 18:19:27 crc kubenswrapper[4877]: E1211 18:19:27.012463 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon-log" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012469 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon-log" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012625 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb8b32a-6068-41da-bac0-f13c2a25e815" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012645 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefe31f3-c374-42c0-9af1-a7e2d095bc6d" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012655 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a34fa8b-c5b0-4297-bbc6-609bf82854f7" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012664 4877 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon-log" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012680 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce805e83-9fac-4e8d-a823-33210302631d" containerName="mariadb-account-create-update" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012693 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b57b46-038c-402a-ab72-36f2870a32fd" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012711 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0501ada-a122-49bd-a65b-52ff7ee6fe00" containerName="mariadb-database-create" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.012723 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" containerName="horizon" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.013392 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.021413 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.025905 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.026204 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gvh5p" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.039716 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rc8vk"] Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.142741 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.142827 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.143074 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2wpj\" (UniqueName: \"kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " 
pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.143215 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.229910 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e767786-f0b1-4dae-b7c5-fd1e00046935" path="/var/lib/kubelet/pods/7e767786-f0b1-4dae-b7c5-fd1e00046935/volumes" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.244989 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.245072 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.245168 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2wpj\" (UniqueName: \"kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.245222 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.252243 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.268251 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.271894 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.288358 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2wpj\" (UniqueName: \"kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj\") pod \"nova-cell0-conductor-db-sync-rc8vk\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.343968 4877 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:27 crc kubenswrapper[4877]: I1211 18:19:27.938959 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rc8vk"] Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.472696 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" event={"ID":"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80","Type":"ContainerStarted","Data":"6e39c2c1d468b1353c1dd3d10a896880e24820d77fc3a2a57ef2b42ab0dd2bfd"} Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.510278 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.510351 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.553512 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.561187 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.905681 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.906487 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.962318 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:28 crc kubenswrapper[4877]: I1211 18:19:28.965677 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.485933 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.485986 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.486000 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.486011 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.696922 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:29 crc kubenswrapper[4877]: I1211 18:19:29.716133 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79885c8c-7qj69" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.528533 4877 generic.go:334] "Generic (PLEG): container finished" podID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerID="3d315bcc22130fd381e261fa9e4d7b9a436e7eaa25ec0407f295c53702393775" exitCode=0 Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.529754 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerDied","Data":"3d315bcc22130fd381e261fa9e4d7b9a436e7eaa25ec0407f295c53702393775"} Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.658708 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.847180 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.848560 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.848689 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vjm\" (UniqueName: \"kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.848731 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.848774 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.848810 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.849079 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd\") pod \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\" (UID: \"4c8eee2c-7c83-457a-965d-a1dca2a70ea3\") " Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.850488 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.850803 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.870450 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm" (OuterVolumeSpecName: "kube-api-access-96vjm") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "kube-api-access-96vjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.908803 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts" (OuterVolumeSpecName: "scripts") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.953246 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.953297 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.953312 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vjm\" (UniqueName: \"kubernetes.io/projected/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-kube-api-access-96vjm\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.953325 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:30 crc kubenswrapper[4877]: I1211 18:19:30.988122 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.018506 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.042537 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data" (OuterVolumeSpecName: "config-data") pod "4c8eee2c-7c83-457a-965d-a1dca2a70ea3" (UID: "4c8eee2c-7c83-457a-965d-a1dca2a70ea3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.056141 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.056175 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.056187 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c8eee2c-7c83-457a-965d-a1dca2a70ea3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.549053 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4c8eee2c-7c83-457a-965d-a1dca2a70ea3","Type":"ContainerDied","Data":"28ce5852c00a0281d69f4481a9d304469afb7e48848a3878fa38edae977e9c62"} Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.549127 4877 scope.go:117] "RemoveContainer" containerID="c2f90dd43f4e4998f3b08001f74365fe3043f5f767705bed1ad81e1e904b4a8c" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.549162 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.601867 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.609416 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.618164 4877 scope.go:117] "RemoveContainer" containerID="1533dfbe8607f32046e6667a9bcb6b8661fa33cbee2b70d3fd224a449b035165" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.627684 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:31 crc kubenswrapper[4877]: E1211 18:19:31.628117 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="proxy-httpd" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628135 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="proxy-httpd" Dec 11 18:19:31 crc kubenswrapper[4877]: E1211 18:19:31.628160 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-central-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628168 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-central-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: E1211 18:19:31.628182 4877 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="sg-core" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628188 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="sg-core" Dec 11 18:19:31 crc kubenswrapper[4877]: E1211 18:19:31.628203 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-notification-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628209 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-notification-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628400 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="sg-core" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628411 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-notification-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628430 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="ceilometer-central-agent" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.628439 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" containerName="proxy-httpd" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.630491 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.639846 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.640220 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.643980 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.697588 4877 scope.go:117] "RemoveContainer" containerID="3d315bcc22130fd381e261fa9e4d7b9a436e7eaa25ec0407f295c53702393775" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.777716 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.777964 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.778088 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkhl\" (UniqueName: \"kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.778152 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.778252 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.778291 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.778326 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.791658 4877 scope.go:117] "RemoveContainer" containerID="a1b69f9d32af0c6af5e1c21eb71e5ae1cab494d320aee97bf72442273144c003" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.879884 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.879941 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.879979 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkhl\" (UniqueName: \"kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.880008 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.880361 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.880435 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.880478 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc 
kubenswrapper[4877]: I1211 18:19:31.880502 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.880744 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.887704 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.888135 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.888798 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.892213 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts\") pod \"ceilometer-0\" (UID: 
\"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.905740 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkhl\" (UniqueName: \"kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl\") pod \"ceilometer-0\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " pod="openstack/ceilometer-0" Dec 11 18:19:31 crc kubenswrapper[4877]: I1211 18:19:31.957302 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.383316 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.383466 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.389009 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.558273 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.558406 4877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.575685 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 18:19:32 crc kubenswrapper[4877]: W1211 18:19:32.630532 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93f06ed7_ca45_4c38_b86d_3f675f82da7c.slice/crio-75aadaec0a768021c54ded81fa8475fed15b4af59847fbfab2272ec23d513a57 WatchSource:0}: Error finding container 
75aadaec0a768021c54ded81fa8475fed15b4af59847fbfab2272ec23d513a57: Status 404 returned error can't find the container with id 75aadaec0a768021c54ded81fa8475fed15b4af59847fbfab2272ec23d513a57 Dec 11 18:19:32 crc kubenswrapper[4877]: I1211 18:19:32.656507 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:33 crc kubenswrapper[4877]: I1211 18:19:33.230016 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8eee2c-7c83-457a-965d-a1dca2a70ea3" path="/var/lib/kubelet/pods/4c8eee2c-7c83-457a-965d-a1dca2a70ea3/volumes" Dec 11 18:19:33 crc kubenswrapper[4877]: I1211 18:19:33.614762 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 18:19:33 crc kubenswrapper[4877]: I1211 18:19:33.616966 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerStarted","Data":"75aadaec0a768021c54ded81fa8475fed15b4af59847fbfab2272ec23d513a57"} Dec 11 18:19:35 crc kubenswrapper[4877]: I1211 18:19:35.213951 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:35 crc kubenswrapper[4877]: I1211 18:19:35.639882 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerStarted","Data":"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2"} Dec 11 18:19:42 crc kubenswrapper[4877]: I1211 18:19:42.758561 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" event={"ID":"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80","Type":"ContainerStarted","Data":"28b365018e4f5a1b9661377ca5a8b8408a8a43019f057240e199ef74868df64d"} Dec 11 18:19:42 crc kubenswrapper[4877]: I1211 18:19:42.766898 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerStarted","Data":"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986"} Dec 11 18:19:42 crc kubenswrapper[4877]: I1211 18:19:42.782347 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" podStartSLOduration=2.470844626 podStartE2EDuration="16.782323142s" podCreationTimestamp="2025-12-11 18:19:26 +0000 UTC" firstStartedPulling="2025-12-11 18:19:27.938520844 +0000 UTC m=+1128.964764888" lastFinishedPulling="2025-12-11 18:19:42.24999936 +0000 UTC m=+1143.276243404" observedRunningTime="2025-12-11 18:19:42.779348043 +0000 UTC m=+1143.805592097" watchObservedRunningTime="2025-12-11 18:19:42.782323142 +0000 UTC m=+1143.808567186" Dec 11 18:19:43 crc kubenswrapper[4877]: I1211 18:19:43.779567 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerStarted","Data":"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c"} Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.901583 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerStarted","Data":"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36"} Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.902802 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.902448 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="proxy-httpd" containerID="cri-o://438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36" gracePeriod=30 Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.902018 4877 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-central-agent" containerID="cri-o://76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2" gracePeriod=30 Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.902548 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="sg-core" containerID="cri-o://c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c" gracePeriod=30 Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.902528 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-notification-agent" containerID="cri-o://547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986" gracePeriod=30 Dec 11 18:19:45 crc kubenswrapper[4877]: I1211 18:19:45.971682 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.631062022 podStartE2EDuration="14.971653086s" podCreationTimestamp="2025-12-11 18:19:31 +0000 UTC" firstStartedPulling="2025-12-11 18:19:32.644136531 +0000 UTC m=+1133.670380575" lastFinishedPulling="2025-12-11 18:19:44.984727595 +0000 UTC m=+1146.010971639" observedRunningTime="2025-12-11 18:19:45.959990667 +0000 UTC m=+1146.986234971" watchObservedRunningTime="2025-12-11 18:19:45.971653086 +0000 UTC m=+1146.997897130" Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.637764 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.638228 4877 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913341 4877 generic.go:334] "Generic (PLEG): container finished" podID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerID="438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36" exitCode=0 Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913774 4877 generic.go:334] "Generic (PLEG): container finished" podID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerID="c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c" exitCode=2 Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913790 4877 generic.go:334] "Generic (PLEG): container finished" podID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerID="547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986" exitCode=0 Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913419 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerDied","Data":"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36"} Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913877 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerDied","Data":"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c"} Dec 11 18:19:46 crc kubenswrapper[4877]: I1211 18:19:46.913905 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerDied","Data":"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986"} Dec 11 18:19:48 crc 
kubenswrapper[4877]: I1211 18:19:48.803844 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.858525 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.858646 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.858684 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.859563 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.859686 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.859794 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.859828 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.859857 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jkhl\" (UniqueName: \"kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl\") pod \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\" (UID: \"93f06ed7-ca45-4c38-b86d-3f675f82da7c\") " Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.860515 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.861033 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.866001 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts" (OuterVolumeSpecName: "scripts") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.880792 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl" (OuterVolumeSpecName: "kube-api-access-8jkhl") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "kube-api-access-8jkhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.888987 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.963575 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93f06ed7-ca45-4c38-b86d-3f675f82da7c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.963724 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jkhl\" (UniqueName: \"kubernetes.io/projected/93f06ed7-ca45-4c38-b86d-3f675f82da7c-kube-api-access-8jkhl\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.963742 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.963756 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.968410 4877 generic.go:334] "Generic (PLEG): container finished" podID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerID="76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2" exitCode=0 Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.968581 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.972132 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.974259 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerDied","Data":"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2"} Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.974436 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93f06ed7-ca45-4c38-b86d-3f675f82da7c","Type":"ContainerDied","Data":"75aadaec0a768021c54ded81fa8475fed15b4af59847fbfab2272ec23d513a57"} Dec 11 18:19:48 crc kubenswrapper[4877]: I1211 18:19:48.974582 4877 scope.go:117] "RemoveContainer" containerID="438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.008658 4877 scope.go:117] "RemoveContainer" containerID="c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.039087 4877 scope.go:117] "RemoveContainer" containerID="547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.073908 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.090668 4877 scope.go:117] "RemoveContainer" containerID="76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.104974 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data" (OuterVolumeSpecName: "config-data") pod "93f06ed7-ca45-4c38-b86d-3f675f82da7c" (UID: "93f06ed7-ca45-4c38-b86d-3f675f82da7c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.116838 4877 scope.go:117] "RemoveContainer" containerID="438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.117617 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36\": container with ID starting with 438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36 not found: ID does not exist" containerID="438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.117696 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36"} err="failed to get container status \"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36\": rpc error: code = NotFound desc = could not find container \"438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36\": container with ID starting with 438f4718b5fbcb7955a36d97080c6b1ffa3d8a3f8aa89e79eae9e5aca7e31c36 not found: ID does not exist" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.117770 4877 scope.go:117] "RemoveContainer" containerID="c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.119863 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c\": container with ID starting with c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c not found: ID does not exist" containerID="c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c" Dec 11 18:19:49 crc 
kubenswrapper[4877]: I1211 18:19:49.119925 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c"} err="failed to get container status \"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c\": rpc error: code = NotFound desc = could not find container \"c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c\": container with ID starting with c558b761253df989f9560ed4dff2312f9f4e94f8546c5aeda57ac1cedede192c not found: ID does not exist" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.119966 4877 scope.go:117] "RemoveContainer" containerID="547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.120316 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986\": container with ID starting with 547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986 not found: ID does not exist" containerID="547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.120362 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986"} err="failed to get container status \"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986\": rpc error: code = NotFound desc = could not find container \"547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986\": container with ID starting with 547a45df2c07c1f3d0ecc2fc18e84d5fe27b99423fd2a8e81e57e5aead8e4986 not found: ID does not exist" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.120410 4877 scope.go:117] "RemoveContainer" containerID="76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2" Dec 11 
18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.120800 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2\": container with ID starting with 76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2 not found: ID does not exist" containerID="76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.120829 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2"} err="failed to get container status \"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2\": rpc error: code = NotFound desc = could not find container \"76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2\": container with ID starting with 76afc523fd9a8bffd2768c9b33ab3e81d5682becec314ac0e300a39240bfbce2 not found: ID does not exist" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.175234 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f06ed7-ca45-4c38-b86d-3f675f82da7c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.332548 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.359337 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.386478 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.387161 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="sg-core" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 
18:19:49.387188 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="sg-core" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.387208 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-central-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387217 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-central-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.387236 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="proxy-httpd" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387245 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="proxy-httpd" Dec 11 18:19:49 crc kubenswrapper[4877]: E1211 18:19:49.387261 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-notification-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387270 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-notification-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387557 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-notification-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387587 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="sg-core" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.387610 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="proxy-httpd" Dec 11 18:19:49 crc 
kubenswrapper[4877]: I1211 18:19:49.387627 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" containerName="ceilometer-central-agent" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.389889 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.392961 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.393257 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.400681 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482128 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482181 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482210 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482230 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482265 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482305 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xvvj\" (UniqueName: \"kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.482469 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.585878 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.585066 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.586509 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.586552 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.586585 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.586646 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.586726 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xvvj\" (UniqueName: \"kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: 
I1211 18:19:49.586812 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.588286 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.591748 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.592588 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.593932 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.611903 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xvvj\" (UniqueName: \"kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj\") pod \"ceilometer-0\" (UID: 
\"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.612288 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts\") pod \"ceilometer-0\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " pod="openstack/ceilometer-0" Dec 11 18:19:49 crc kubenswrapper[4877]: I1211 18:19:49.721112 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:19:50 crc kubenswrapper[4877]: I1211 18:19:50.372664 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:51 crc kubenswrapper[4877]: I1211 18:19:51.014547 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerStarted","Data":"2f6829a9bf746229d59da254a33d1a273ab2a18f40cbe258719b0e6961900ead"} Dec 11 18:19:51 crc kubenswrapper[4877]: I1211 18:19:51.226606 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f06ed7-ca45-4c38-b86d-3f675f82da7c" path="/var/lib/kubelet/pods/93f06ed7-ca45-4c38-b86d-3f675f82da7c/volumes" Dec 11 18:19:52 crc kubenswrapper[4877]: I1211 18:19:52.962612 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:19:53 crc kubenswrapper[4877]: I1211 18:19:53.036417 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerStarted","Data":"846ded3c66b925d36d2b4416a095875ca80014e2bdb316794c8f9a374c118923"} Dec 11 18:19:54 crc kubenswrapper[4877]: I1211 18:19:54.048629 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerStarted","Data":"26d8f1d19a7019b1b8913b64b9601a6a686abcff3bfaa133f4218d0d382006ce"} Dec 11 18:19:55 crc kubenswrapper[4877]: I1211 18:19:55.062394 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerStarted","Data":"07388aa5d8bc2960fe370fed3ee36206fa79cb1722b3dcd01c164ae0e96f6ed3"} Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.091238 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerStarted","Data":"a5941b35d275697ea8c94dcc4004b4e82a2fdf6ed174884d39ea254efa770c44"} Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.092114 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.091578 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-notification-agent" containerID="cri-o://26d8f1d19a7019b1b8913b64b9601a6a686abcff3bfaa133f4218d0d382006ce" gracePeriod=30 Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.091466 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-central-agent" containerID="cri-o://846ded3c66b925d36d2b4416a095875ca80014e2bdb316794c8f9a374c118923" gracePeriod=30 Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.091625 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="proxy-httpd" containerID="cri-o://a5941b35d275697ea8c94dcc4004b4e82a2fdf6ed174884d39ea254efa770c44" gracePeriod=30 Dec 11 18:19:57 crc 
kubenswrapper[4877]: I1211 18:19:57.091604 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="sg-core" containerID="cri-o://07388aa5d8bc2960fe370fed3ee36206fa79cb1722b3dcd01c164ae0e96f6ed3" gracePeriod=30 Dec 11 18:19:57 crc kubenswrapper[4877]: I1211 18:19:57.130405 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.523426539 podStartE2EDuration="8.130348677s" podCreationTimestamp="2025-12-11 18:19:49 +0000 UTC" firstStartedPulling="2025-12-11 18:19:50.370234227 +0000 UTC m=+1151.396478271" lastFinishedPulling="2025-12-11 18:19:55.977156355 +0000 UTC m=+1157.003400409" observedRunningTime="2025-12-11 18:19:57.12330801 +0000 UTC m=+1158.149552104" watchObservedRunningTime="2025-12-11 18:19:57.130348677 +0000 UTC m=+1158.156592721" Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147583 4877 generic.go:334] "Generic (PLEG): container finished" podID="13910b56-fe2c-426d-859d-edd05d171754" containerID="a5941b35d275697ea8c94dcc4004b4e82a2fdf6ed174884d39ea254efa770c44" exitCode=0 Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147662 4877 generic.go:334] "Generic (PLEG): container finished" podID="13910b56-fe2c-426d-859d-edd05d171754" containerID="07388aa5d8bc2960fe370fed3ee36206fa79cb1722b3dcd01c164ae0e96f6ed3" exitCode=2 Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147680 4877 generic.go:334] "Generic (PLEG): container finished" podID="13910b56-fe2c-426d-859d-edd05d171754" containerID="26d8f1d19a7019b1b8913b64b9601a6a686abcff3bfaa133f4218d0d382006ce" exitCode=0 Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147693 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerDied","Data":"a5941b35d275697ea8c94dcc4004b4e82a2fdf6ed174884d39ea254efa770c44"} Dec 11 
18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147756 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerDied","Data":"07388aa5d8bc2960fe370fed3ee36206fa79cb1722b3dcd01c164ae0e96f6ed3"} Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.147769 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerDied","Data":"26d8f1d19a7019b1b8913b64b9601a6a686abcff3bfaa133f4218d0d382006ce"} Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.150021 4877 generic.go:334] "Generic (PLEG): container finished" podID="6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" containerID="28b365018e4f5a1b9661377ca5a8b8408a8a43019f057240e199ef74868df64d" exitCode=0 Dec 11 18:19:58 crc kubenswrapper[4877]: I1211 18:19:58.150053 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" event={"ID":"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80","Type":"ContainerDied","Data":"28b365018e4f5a1b9661377ca5a8b8408a8a43019f057240e199ef74868df64d"} Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.743875 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.827036 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle\") pod \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.827194 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2wpj\" (UniqueName: \"kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj\") pod \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.827299 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts\") pod \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.827582 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data\") pod \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\" (UID: \"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80\") " Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.835237 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj" (OuterVolumeSpecName: "kube-api-access-w2wpj") pod "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" (UID: "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80"). InnerVolumeSpecName "kube-api-access-w2wpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.836605 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts" (OuterVolumeSpecName: "scripts") pod "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" (UID: "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.861436 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data" (OuterVolumeSpecName: "config-data") pod "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" (UID: "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.867308 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" (UID: "6503cc3e-c36a-4a9d-aced-5c5e4a2fde80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.931110 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.931153 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.931167 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2wpj\" (UniqueName: \"kubernetes.io/projected/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-kube-api-access-w2wpj\") on node \"crc\" DevicePath \"\"" Dec 11 18:19:59 crc kubenswrapper[4877]: I1211 18:19:59.931178 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.176024 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" event={"ID":"6503cc3e-c36a-4a9d-aced-5c5e4a2fde80","Type":"ContainerDied","Data":"6e39c2c1d468b1353c1dd3d10a896880e24820d77fc3a2a57ef2b42ab0dd2bfd"} Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.176424 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e39c2c1d468b1353c1dd3d10a896880e24820d77fc3a2a57ef2b42ab0dd2bfd" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.176502 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rc8vk" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.303683 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 18:20:00 crc kubenswrapper[4877]: E1211 18:20:00.304448 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" containerName="nova-cell0-conductor-db-sync" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.304490 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" containerName="nova-cell0-conductor-db-sync" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.304734 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" containerName="nova-cell0-conductor-db-sync" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.305635 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.308578 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.308599 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gvh5p" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.316436 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.359995 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwsg\" (UniqueName: \"kubernetes.io/projected/c8fc48ab-7a71-46a1-9557-65c034c6af7e-kube-api-access-tlwsg\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc 
kubenswrapper[4877]: I1211 18:20:00.360071 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.360119 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.461965 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.462530 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.463419 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwsg\" (UniqueName: \"kubernetes.io/projected/c8fc48ab-7a71-46a1-9557-65c034c6af7e-kube-api-access-tlwsg\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.467765 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.470995 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8fc48ab-7a71-46a1-9557-65c034c6af7e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.490830 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwsg\" (UniqueName: \"kubernetes.io/projected/c8fc48ab-7a71-46a1-9557-65c034c6af7e-kube-api-access-tlwsg\") pod \"nova-cell0-conductor-0\" (UID: \"c8fc48ab-7a71-46a1-9557-65c034c6af7e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:00 crc kubenswrapper[4877]: I1211 18:20:00.684172 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:01 crc kubenswrapper[4877]: I1211 18:20:01.165795 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 18:20:01 crc kubenswrapper[4877]: I1211 18:20:01.195593 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c8fc48ab-7a71-46a1-9557-65c034c6af7e","Type":"ContainerStarted","Data":"c7ddd6f434d3fdaad7a500e6811a112be0797763bf6ef5280f0819ebc6a9a39b"} Dec 11 18:20:02 crc kubenswrapper[4877]: I1211 18:20:02.209083 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c8fc48ab-7a71-46a1-9557-65c034c6af7e","Type":"ContainerStarted","Data":"2beae04ca9cdaf7503bbf27562766d9d2818e7014a65b873709d06d25fd9f844"} Dec 11 18:20:02 crc kubenswrapper[4877]: I1211 18:20:02.210940 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:02 crc kubenswrapper[4877]: I1211 18:20:02.237953 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.237362972 podStartE2EDuration="2.237362972s" podCreationTimestamp="2025-12-11 18:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:20:02.227890381 +0000 UTC m=+1163.254134465" watchObservedRunningTime="2025-12-11 18:20:02.237362972 +0000 UTC m=+1163.263607056" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.232179 4877 generic.go:334] "Generic (PLEG): container finished" podID="13910b56-fe2c-426d-859d-edd05d171754" containerID="846ded3c66b925d36d2b4416a095875ca80014e2bdb316794c8f9a374c118923" exitCode=0 Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.232281 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerDied","Data":"846ded3c66b925d36d2b4416a095875ca80014e2bdb316794c8f9a374c118923"} Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.482439 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.550249 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xvvj\" (UniqueName: \"kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551034 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551210 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551509 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551652 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd\") pod 
\"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551810 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551919 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.551950 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle\") pod \"13910b56-fe2c-426d-859d-edd05d171754\" (UID: \"13910b56-fe2c-426d-859d-edd05d171754\") " Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.553407 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.555244 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.572933 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts" (OuterVolumeSpecName: "scripts") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.596772 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj" (OuterVolumeSpecName: "kube-api-access-4xvvj") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "kube-api-access-4xvvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.661751 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xvvj\" (UniqueName: \"kubernetes.io/projected/13910b56-fe2c-426d-859d-edd05d171754-kube-api-access-4xvvj\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.661794 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.661804 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13910b56-fe2c-426d-859d-edd05d171754-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.663633 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.739567 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.764131 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.764158 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.775133 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data" (OuterVolumeSpecName: "config-data") pod "13910b56-fe2c-426d-859d-edd05d171754" (UID: "13910b56-fe2c-426d-859d-edd05d171754"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:04 crc kubenswrapper[4877]: I1211 18:20:04.866051 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13910b56-fe2c-426d-859d-edd05d171754-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.246601 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13910b56-fe2c-426d-859d-edd05d171754","Type":"ContainerDied","Data":"2f6829a9bf746229d59da254a33d1a273ab2a18f40cbe258719b0e6961900ead"} Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.246667 4877 scope.go:117] "RemoveContainer" containerID="a5941b35d275697ea8c94dcc4004b4e82a2fdf6ed174884d39ea254efa770c44" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.246709 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.270884 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.284390 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.295067 4877 scope.go:117] "RemoveContainer" containerID="07388aa5d8bc2960fe370fed3ee36206fa79cb1722b3dcd01c164ae0e96f6ed3" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.317344 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:05 crc kubenswrapper[4877]: E1211 18:20:05.317875 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-notification-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.317897 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="13910b56-fe2c-426d-859d-edd05d171754" 
containerName="ceilometer-notification-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: E1211 18:20:05.317912 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="sg-core" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.317918 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="sg-core" Dec 11 18:20:05 crc kubenswrapper[4877]: E1211 18:20:05.317938 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="proxy-httpd" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.317943 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="proxy-httpd" Dec 11 18:20:05 crc kubenswrapper[4877]: E1211 18:20:05.317972 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-central-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.317977 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-central-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.318158 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-central-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.318166 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="ceilometer-notification-agent" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.318175 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="proxy-httpd" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.318188 4877 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="13910b56-fe2c-426d-859d-edd05d171754" containerName="sg-core" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.320075 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.326636 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.326943 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.330643 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.337472 4877 scope.go:117] "RemoveContainer" containerID="26d8f1d19a7019b1b8913b64b9601a6a686abcff3bfaa133f4218d0d382006ce" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.376729 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.376831 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.376865 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 
18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.376917 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.376982 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.377025 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhhs\" (UniqueName: \"kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.377060 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.405637 4877 scope.go:117] "RemoveContainer" containerID="846ded3c66b925d36d2b4416a095875ca80014e2bdb316794c8f9a374c118923" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479114 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479237 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479496 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479564 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479764 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.479913 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.480007 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhhs\" (UniqueName: 
\"kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.481541 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.482763 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.486167 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.486280 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.491524 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.492450 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.507021 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhhs\" (UniqueName: \"kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs\") pod \"ceilometer-0\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " pod="openstack/ceilometer-0" Dec 11 18:20:05 crc kubenswrapper[4877]: I1211 18:20:05.680317 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:06 crc kubenswrapper[4877]: I1211 18:20:06.187697 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:06 crc kubenswrapper[4877]: I1211 18:20:06.259320 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerStarted","Data":"e058e4c62349e2eb16a42a2610c71f443e0877f3c8fc584dd337ae8a0dcc4797"} Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.227458 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13910b56-fe2c-426d-859d-edd05d171754" path="/var/lib/kubelet/pods/13910b56-fe2c-426d-859d-edd05d171754/volumes" Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.272502 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39" exitCode=1 Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.272566 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" 
event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39"} Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.272605 4877 scope.go:117] "RemoveContainer" containerID="67127e95e2041d1a7d7ee10db0a8ac49279a0dfa1be2848da530f261425492c9" Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.273779 4877 scope.go:117] "RemoveContainer" containerID="2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39" Dec 11 18:20:07 crc kubenswrapper[4877]: E1211 18:20:07.274041 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:20:07 crc kubenswrapper[4877]: I1211 18:20:07.284802 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerStarted","Data":"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a"} Dec 11 18:20:09 crc kubenswrapper[4877]: I1211 18:20:09.316311 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerStarted","Data":"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86"} Dec 11 18:20:09 crc kubenswrapper[4877]: I1211 18:20:09.317139 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerStarted","Data":"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c"} Dec 11 18:20:10 crc kubenswrapper[4877]: I1211 18:20:10.756074 4877 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.138513 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.139352 4877 scope.go:117] "RemoveContainer" containerID="2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39" Dec 11 18:20:11 crc kubenswrapper[4877]: E1211 18:20:11.140497 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.746123 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-57l9c"] Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.747463 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.750436 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.751004 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.767459 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-57l9c"] Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.848283 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zxp\" (UniqueName: \"kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.848795 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.848832 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.848900 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.951785 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.952194 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.952420 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.952579 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zxp\" (UniqueName: \"kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.962898 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.963252 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.974160 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:11 crc kubenswrapper[4877]: I1211 18:20:11.979935 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zxp\" (UniqueName: \"kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp\") pod \"nova-cell0-cell-mapping-57l9c\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") " pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.071494 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-57l9c" Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.309352 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-g9c4v"] Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.311695 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.314195 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.314471 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.327835 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-g9c4v"]
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.350351 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerStarted","Data":"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91"}
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.350537 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.360056 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.360132 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl769\" (UniqueName: \"kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.360171 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.360208 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.384332 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.76717154 podStartE2EDuration="7.384298879s" podCreationTimestamp="2025-12-11 18:20:05 +0000 UTC" firstStartedPulling="2025-12-11 18:20:06.18601935 +0000 UTC m=+1167.212263394" lastFinishedPulling="2025-12-11 18:20:10.803146689 +0000 UTC m=+1171.829390733" observedRunningTime="2025-12-11 18:20:12.372570441 +0000 UTC m=+1173.398814495" watchObservedRunningTime="2025-12-11 18:20:12.384298879 +0000 UTC m=+1173.410542943"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.462978 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.463087 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.463241 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.463337 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl769\" (UniqueName: \"kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.474075 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.474771 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.477022 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.484286 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl769\" (UniqueName: \"kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769\") pod \"nova-cell1-conductor-db-sync-g9c4v\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") " pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.602850 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-57l9c"]
Dec 11 18:20:12 crc kubenswrapper[4877]: I1211 18:20:12.638039 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:13 crc kubenswrapper[4877]: I1211 18:20:13.127044 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-g9c4v"]
Dec 11 18:20:13 crc kubenswrapper[4877]: I1211 18:20:13.360500 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-g9c4v" event={"ID":"ffb9ba5f-800f-4449-8005-964f8e74f7e5","Type":"ContainerStarted","Data":"22f2c64a62827f4c01a71b2b939ebfe279cc6bd8f461a2c591ba397233bcd8d8"}
Dec 11 18:20:13 crc kubenswrapper[4877]: I1211 18:20:13.363119 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-57l9c" event={"ID":"b01deb5d-7223-4abd-8bd9-502ddb0d74df","Type":"ContainerStarted","Data":"4c964521ffa0c27b319fdd69ac8c5eeff09323d6c16c8e8544b468f8b940354a"}
Dec 11 18:20:13 crc kubenswrapper[4877]: I1211 18:20:13.363193 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-57l9c" event={"ID":"b01deb5d-7223-4abd-8bd9-502ddb0d74df","Type":"ContainerStarted","Data":"502f1bc9c4c67c6e297e3caab51e16b6c0f6f86123a8597d7017d534a32c7d4d"}
Dec 11 18:20:13 crc kubenswrapper[4877]: I1211 18:20:13.383979 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-57l9c" podStartSLOduration=2.383934381 podStartE2EDuration="2.383934381s" podCreationTimestamp="2025-12-11 18:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:20:13.381558617 +0000 UTC m=+1174.407802671" watchObservedRunningTime="2025-12-11 18:20:13.383934381 +0000 UTC m=+1174.410178425"
Dec 11 18:20:14 crc kubenswrapper[4877]: I1211 18:20:14.378557 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-g9c4v" event={"ID":"ffb9ba5f-800f-4449-8005-964f8e74f7e5","Type":"ContainerStarted","Data":"6d55f177eb699e867a14a3a616aa73d27e9f1ea496f2d91a6de60a3c553307dd"}
Dec 11 18:20:14 crc kubenswrapper[4877]: I1211 18:20:14.420271 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-g9c4v" podStartSLOduration=2.420184847 podStartE2EDuration="2.420184847s" podCreationTimestamp="2025-12-11 18:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:20:14.410039542 +0000 UTC m=+1175.436283596" watchObservedRunningTime="2025-12-11 18:20:14.420184847 +0000 UTC m=+1175.446428911"
Dec 11 18:20:16 crc kubenswrapper[4877]: I1211 18:20:16.639522 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 18:20:16 crc kubenswrapper[4877]: I1211 18:20:16.640236 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 18:20:20 crc kubenswrapper[4877]: I1211 18:20:20.460108 4877 generic.go:334] "Generic (PLEG): container finished" podID="ffb9ba5f-800f-4449-8005-964f8e74f7e5" containerID="6d55f177eb699e867a14a3a616aa73d27e9f1ea496f2d91a6de60a3c553307dd" exitCode=0
Dec 11 18:20:20 crc kubenswrapper[4877]: I1211 18:20:20.460156 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-g9c4v" event={"ID":"ffb9ba5f-800f-4449-8005-964f8e74f7e5","Type":"ContainerDied","Data":"6d55f177eb699e867a14a3a616aa73d27e9f1ea496f2d91a6de60a3c553307dd"}
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.137538 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.139073 4877 scope.go:117] "RemoveContainer" containerID="2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39"
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.478323 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab"}
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.478691 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.481131 4877 generic.go:334] "Generic (PLEG): container finished" podID="b01deb5d-7223-4abd-8bd9-502ddb0d74df" containerID="4c964521ffa0c27b319fdd69ac8c5eeff09323d6c16c8e8544b468f8b940354a" exitCode=0
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.481204 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-57l9c" event={"ID":"b01deb5d-7223-4abd-8bd9-502ddb0d74df","Type":"ContainerDied","Data":"4c964521ffa0c27b319fdd69ac8c5eeff09323d6c16c8e8544b468f8b940354a"}
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.858994 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.896403 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl769\" (UniqueName: \"kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769\") pod \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") "
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.896517 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") pod \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") "
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.896763 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle\") pod \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") "
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.896868 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts\") pod \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") "
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.904576 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts" (OuterVolumeSpecName: "scripts") pod "ffb9ba5f-800f-4449-8005-964f8e74f7e5" (UID: "ffb9ba5f-800f-4449-8005-964f8e74f7e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.909361 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769" (OuterVolumeSpecName: "kube-api-access-gl769") pod "ffb9ba5f-800f-4449-8005-964f8e74f7e5" (UID: "ffb9ba5f-800f-4449-8005-964f8e74f7e5"). InnerVolumeSpecName "kube-api-access-gl769". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:20:21 crc kubenswrapper[4877]: E1211 18:20:21.934816 4877 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data podName:ffb9ba5f-800f-4449-8005-964f8e74f7e5 nodeName:}" failed. No retries permitted until 2025-12-11 18:20:22.434774604 +0000 UTC m=+1183.461018658 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data") pod "ffb9ba5f-800f-4449-8005-964f8e74f7e5" (UID: "ffb9ba5f-800f-4449-8005-964f8e74f7e5") : error deleting /var/lib/kubelet/pods/ffb9ba5f-800f-4449-8005-964f8e74f7e5/volume-subpaths: remove /var/lib/kubelet/pods/ffb9ba5f-800f-4449-8005-964f8e74f7e5/volume-subpaths: no such file or directory
Dec 11 18:20:21 crc kubenswrapper[4877]: I1211 18:20:21.939016 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffb9ba5f-800f-4449-8005-964f8e74f7e5" (UID: "ffb9ba5f-800f-4449-8005-964f8e74f7e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.000957 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.001016 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.001043 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl769\" (UniqueName: \"kubernetes.io/projected/ffb9ba5f-800f-4449-8005-964f8e74f7e5-kube-api-access-gl769\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.499131 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-g9c4v" event={"ID":"ffb9ba5f-800f-4449-8005-964f8e74f7e5","Type":"ContainerDied","Data":"22f2c64a62827f4c01a71b2b939ebfe279cc6bd8f461a2c591ba397233bcd8d8"}
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.499197 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f2c64a62827f4c01a71b2b939ebfe279cc6bd8f461a2c591ba397233bcd8d8"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.499298 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-g9c4v"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.511501 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") pod \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\" (UID: \"ffb9ba5f-800f-4449-8005-964f8e74f7e5\") "
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.537076 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data" (OuterVolumeSpecName: "config-data") pod "ffb9ba5f-800f-4449-8005-964f8e74f7e5" (UID: "ffb9ba5f-800f-4449-8005-964f8e74f7e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.615092 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb9ba5f-800f-4449-8005-964f8e74f7e5-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.622902 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 11 18:20:22 crc kubenswrapper[4877]: E1211 18:20:22.623458 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb9ba5f-800f-4449-8005-964f8e74f7e5" containerName="nova-cell1-conductor-db-sync"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.623487 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb9ba5f-800f-4449-8005-964f8e74f7e5" containerName="nova-cell1-conductor-db-sync"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.623688 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb9ba5f-800f-4449-8005-964f8e74f7e5" containerName="nova-cell1-conductor-db-sync"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.624517 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.718453 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhktt\" (UniqueName: \"kubernetes.io/projected/6d9e1d47-035b-4789-91ca-61940c628347-kube-api-access-jhktt\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.718579 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.718689 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.719576 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.821117 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhktt\" (UniqueName: \"kubernetes.io/projected/6d9e1d47-035b-4789-91ca-61940c628347-kube-api-access-jhktt\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.821240 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.821329 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.827712 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.828173 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9e1d47-035b-4789-91ca-61940c628347-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.845889 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhktt\" (UniqueName: \"kubernetes.io/projected/6d9e1d47-035b-4789-91ca-61940c628347-kube-api-access-jhktt\") pod \"nova-cell1-conductor-0\" (UID: \"6d9e1d47-035b-4789-91ca-61940c628347\") " pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.965104 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-57l9c"
Dec 11 18:20:22 crc kubenswrapper[4877]: I1211 18:20:22.980084 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.026936 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4zxp\" (UniqueName: \"kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp\") pod \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") "
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.027056 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts\") pod \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") "
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.027164 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle\") pod \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") "
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.027312 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data\") pod \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\" (UID: \"b01deb5d-7223-4abd-8bd9-502ddb0d74df\") "
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.032693 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts" (OuterVolumeSpecName: "scripts") pod "b01deb5d-7223-4abd-8bd9-502ddb0d74df" (UID: "b01deb5d-7223-4abd-8bd9-502ddb0d74df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.034268 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp" (OuterVolumeSpecName: "kube-api-access-j4zxp") pod "b01deb5d-7223-4abd-8bd9-502ddb0d74df" (UID: "b01deb5d-7223-4abd-8bd9-502ddb0d74df"). InnerVolumeSpecName "kube-api-access-j4zxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.059971 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b01deb5d-7223-4abd-8bd9-502ddb0d74df" (UID: "b01deb5d-7223-4abd-8bd9-502ddb0d74df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.070922 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data" (OuterVolumeSpecName: "config-data") pod "b01deb5d-7223-4abd-8bd9-502ddb0d74df" (UID: "b01deb5d-7223-4abd-8bd9-502ddb0d74df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.130085 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4zxp\" (UniqueName: \"kubernetes.io/projected/b01deb5d-7223-4abd-8bd9-502ddb0d74df-kube-api-access-j4zxp\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.130551 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.130568 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.130582 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b01deb5d-7223-4abd-8bd9-502ddb0d74df-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 18:20:23 crc kubenswrapper[4877]: W1211 18:20:23.456905 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d9e1d47_035b_4789_91ca_61940c628347.slice/crio-12a1272b0a11785eb5b153c4962525bd6f57dd2dbb6bebf87664c9c8008563f2 WatchSource:0}: Error finding container 12a1272b0a11785eb5b153c4962525bd6f57dd2dbb6bebf87664c9c8008563f2: Status 404 returned error can't find the container with id 12a1272b0a11785eb5b153c4962525bd6f57dd2dbb6bebf87664c9c8008563f2
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.458407 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.514271 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-57l9c" event={"ID":"b01deb5d-7223-4abd-8bd9-502ddb0d74df","Type":"ContainerDied","Data":"502f1bc9c4c67c6e297e3caab51e16b6c0f6f86123a8597d7017d534a32c7d4d"}
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.514323 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-57l9c"
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.514351 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502f1bc9c4c67c6e297e3caab51e16b6c0f6f86123a8597d7017d534a32c7d4d"
Dec 11 18:20:23 crc kubenswrapper[4877]: I1211 18:20:23.516930 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6d9e1d47-035b-4789-91ca-61940c628347","Type":"ContainerStarted","Data":"12a1272b0a11785eb5b153c4962525bd6f57dd2dbb6bebf87664c9c8008563f2"}
Dec 11 18:20:24 crc kubenswrapper[4877]: I1211 18:20:24.531035 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6d9e1d47-035b-4789-91ca-61940c628347","Type":"ContainerStarted","Data":"7305d9bcefb801bc543653e7d7630182ee8751cc83a96920649f470e750d7a4c"}
Dec 11 18:20:24 crc kubenswrapper[4877]: I1211 18:20:24.531191 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 11 18:20:24 crc kubenswrapper[4877]: I1211 18:20:24.558306 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.558281695 podStartE2EDuration="2.558281695s" podCreationTimestamp="2025-12-11 18:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:20:24.547922684 +0000 UTC m=+1185.574166728" watchObservedRunningTime="2025-12-11 18:20:24.558281695 +0000 UTC m=+1185.584525739"
Dec 11 18:20:31 crc kubenswrapper[4877]: I1211 18:20:31.143991 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.618691 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: E1211 18:20:32.620726 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01deb5d-7223-4abd-8bd9-502ddb0d74df" containerName="nova-manage"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.620819 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01deb5d-7223-4abd-8bd9-502ddb0d74df" containerName="nova-manage"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.621123 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01deb5d-7223-4abd-8bd9-502ddb0d74df" containerName="nova-manage"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.622008 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.626845 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.643792 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.646022 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.652452 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.668973 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.685112 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.750657 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.752620 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.760113 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.762617 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.783770 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x798\" (UniqueName: \"kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.783837 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.783870 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dfh\" (UniqueName: \"kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.783945 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.783983 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.784011 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.784055 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.824543 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.830328 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.836407 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.861349 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888730 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888792 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888826 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888856 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqxz\" (UniqueName: \"kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888888 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888913 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfd4x\" (UniqueName: \"kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888937 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888959 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.888983 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0"
Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.889015
4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.889053 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x798\" (UniqueName: \"kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.889154 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.889202 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6dfh\" (UniqueName: \"kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.889233 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.900929 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs\") pod 
\"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.901388 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.902767 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.903224 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.903917 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.928831 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6dfh\" (UniqueName: \"kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh\") pod \"nova-scheduler-0\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " pod="openstack/nova-scheduler-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.929272 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4x798\" (UniqueName: \"kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798\") pod \"nova-metadata-0\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.949080 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991539 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991594 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991620 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfqxz\" (UniqueName: \"kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991654 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfd4x\" (UniqueName: \"kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:32 crc kubenswrapper[4877]: 
I1211 18:20:32.991672 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991693 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.991765 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.994233 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.996121 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:32 crc kubenswrapper[4877]: I1211 18:20:32.999948 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.002981 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.003021 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.004068 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.014938 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfqxz\" (UniqueName: 
\"kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz\") pod \"nova-api-0\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " pod="openstack/nova-api-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.015206 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfd4x\" (UniqueName: \"kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x\") pod \"nova-cell1-novncproxy-0\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.029176 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.081324 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.181285 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.487722 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.568517 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.640112 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerStarted","Data":"8e238ff2a74f006f966362f4cc17056c5cd2a6593c1c3322b5f8e2d30efe314c"} Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.641464 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"58cd2de3-6c98-4325-aaf5-134e9e44638a","Type":"ContainerStarted","Data":"9679aeccaf9910ecf87ac463157b6ace81aee3fe031e5dd616c681ce6d4fdc01"} Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.715387 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:20:33 crc kubenswrapper[4877]: I1211 18:20:33.725340 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:20:33 crc kubenswrapper[4877]: W1211 18:20:33.732085 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed1dadc_5c80_4e4c_9342_460ceed5a7a2.slice/crio-fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a WatchSource:0}: Error finding container fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a: Status 404 returned error can't find the container with id fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a Dec 11 18:20:34 crc kubenswrapper[4877]: I1211 18:20:34.656169 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerStarted","Data":"2e6abcfff9580a2e39df5294de17a3075843f8126d873deb1843c3c4632c4f22"} Dec 11 18:20:34 crc kubenswrapper[4877]: I1211 18:20:34.657765 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bed1dadc-5c80-4e4c-9342-460ceed5a7a2","Type":"ContainerStarted","Data":"fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a"} Dec 11 18:20:35 crc kubenswrapper[4877]: I1211 18:20:35.689557 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 18:20:36 crc kubenswrapper[4877]: I1211 18:20:36.623105 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:20:36 crc kubenswrapper[4877]: I1211 18:20:36.634789 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.706675 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"58cd2de3-6c98-4325-aaf5-134e9e44638a","Type":"ContainerStarted","Data":"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.711303 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerStarted","Data":"ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.711342 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerStarted","Data":"99355a74e65d8bba6d8b2f9ee0906bd3e9631d80923edb40c639de35005aba25"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.711477 4877 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-log" containerID="cri-o://99355a74e65d8bba6d8b2f9ee0906bd3e9631d80923edb40c639de35005aba25" gracePeriod=30 Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.711594 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-metadata" containerID="cri-o://ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242" gracePeriod=30 Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.714312 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bed1dadc-5c80-4e4c-9342-460ceed5a7a2","Type":"ContainerStarted","Data":"3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.714347 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c" gracePeriod=30 Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.718228 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerStarted","Data":"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.718267 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerStarted","Data":"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67"} Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.737678 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.592027291 podStartE2EDuration="6.737659141s" podCreationTimestamp="2025-12-11 18:20:32 +0000 UTC" firstStartedPulling="2025-12-11 18:20:33.492668922 +0000 UTC m=+1194.518912966" lastFinishedPulling="2025-12-11 18:20:37.638300732 +0000 UTC m=+1198.664544816" observedRunningTime="2025-12-11 18:20:38.730358783 +0000 UTC m=+1199.756602827" watchObservedRunningTime="2025-12-11 18:20:38.737659141 +0000 UTC m=+1199.763903185" Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.777440 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.628898241 podStartE2EDuration="6.777420669s" podCreationTimestamp="2025-12-11 18:20:32 +0000 UTC" firstStartedPulling="2025-12-11 18:20:33.728597954 +0000 UTC m=+1194.754841998" lastFinishedPulling="2025-12-11 18:20:37.877120382 +0000 UTC m=+1198.903364426" observedRunningTime="2025-12-11 18:20:38.750461118 +0000 UTC m=+1199.776705162" watchObservedRunningTime="2025-12-11 18:20:38.777420669 +0000 UTC m=+1199.803664713" Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.792878 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.732265126 podStartE2EDuration="6.792852528s" podCreationTimestamp="2025-12-11 18:20:32 +0000 UTC" firstStartedPulling="2025-12-11 18:20:33.576154558 +0000 UTC m=+1194.602398602" lastFinishedPulling="2025-12-11 18:20:37.63674197 +0000 UTC m=+1198.662986004" observedRunningTime="2025-12-11 18:20:38.768419655 +0000 UTC m=+1199.794663699" watchObservedRunningTime="2025-12-11 18:20:38.792852528 +0000 UTC m=+1199.819096582" Dec 11 18:20:38 crc kubenswrapper[4877]: I1211 18:20:38.797094 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.891666361 podStartE2EDuration="6.797081743s" podCreationTimestamp="2025-12-11 18:20:32 +0000 UTC" 
firstStartedPulling="2025-12-11 18:20:33.736039696 +0000 UTC m=+1194.762283740" lastFinishedPulling="2025-12-11 18:20:37.641455048 +0000 UTC m=+1198.667699122" observedRunningTime="2025-12-11 18:20:38.784892572 +0000 UTC m=+1199.811136616" watchObservedRunningTime="2025-12-11 18:20:38.797081743 +0000 UTC m=+1199.823325787" Dec 11 18:20:39 crc kubenswrapper[4877]: I1211 18:20:39.729851 4877 generic.go:334] "Generic (PLEG): container finished" podID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerID="99355a74e65d8bba6d8b2f9ee0906bd3e9631d80923edb40c639de35005aba25" exitCode=143 Dec 11 18:20:39 crc kubenswrapper[4877]: I1211 18:20:39.730072 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerDied","Data":"99355a74e65d8bba6d8b2f9ee0906bd3e9631d80923edb40c639de35005aba25"} Dec 11 18:20:39 crc kubenswrapper[4877]: I1211 18:20:39.994515 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:39 crc kubenswrapper[4877]: I1211 18:20:39.994840 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="304062a6-e2be-499a-a93f-5f439a525e46" containerName="kube-state-metrics" containerID="cri-o://bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de" gracePeriod=30 Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.551902 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.687180 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7mgt\" (UniqueName: \"kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt\") pod \"304062a6-e2be-499a-a93f-5f439a525e46\" (UID: \"304062a6-e2be-499a-a93f-5f439a525e46\") " Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.707730 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt" (OuterVolumeSpecName: "kube-api-access-x7mgt") pod "304062a6-e2be-499a-a93f-5f439a525e46" (UID: "304062a6-e2be-499a-a93f-5f439a525e46"). InnerVolumeSpecName "kube-api-access-x7mgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.769831 4877 generic.go:334] "Generic (PLEG): container finished" podID="304062a6-e2be-499a-a93f-5f439a525e46" containerID="bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de" exitCode=2 Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.769904 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304062a6-e2be-499a-a93f-5f439a525e46","Type":"ContainerDied","Data":"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de"} Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.769925 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.769956 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304062a6-e2be-499a-a93f-5f439a525e46","Type":"ContainerDied","Data":"c01780141097dc473143351a6aeade81b11315268122a8fa954bb9212170e345"} Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.769984 4877 scope.go:117] "RemoveContainer" containerID="bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.796919 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7mgt\" (UniqueName: \"kubernetes.io/projected/304062a6-e2be-499a-a93f-5f439a525e46-kube-api-access-x7mgt\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.808512 4877 scope.go:117] "RemoveContainer" containerID="bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de" Dec 11 18:20:40 crc kubenswrapper[4877]: E1211 18:20:40.811761 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de\": container with ID starting with bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de not found: ID does not exist" containerID="bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.811857 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de"} err="failed to get container status \"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de\": rpc error: code = NotFound desc = could not find container \"bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de\": container with ID starting with 
bcbcde474e4085450fc15ea92728b34d62f5d0cd35c19f99ace27a2b507b33de not found: ID does not exist" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.822638 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.831990 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.882111 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:40 crc kubenswrapper[4877]: E1211 18:20:40.882684 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304062a6-e2be-499a-a93f-5f439a525e46" containerName="kube-state-metrics" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.882702 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="304062a6-e2be-499a-a93f-5f439a525e46" containerName="kube-state-metrics" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.882896 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="304062a6-e2be-499a-a93f-5f439a525e46" containerName="kube-state-metrics" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.883676 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.900481 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.902524 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 11 18:20:40 crc kubenswrapper[4877]: I1211 18:20:40.917046 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.007771 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.007837 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9ds\" (UniqueName: \"kubernetes.io/projected/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-api-access-4t9ds\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.007912 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.008243 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.111182 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.111311 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.111345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t9ds\" (UniqueName: \"kubernetes.io/projected/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-api-access-4t9ds\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.111424 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.117581 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.117735 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.121123 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.135257 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t9ds\" (UniqueName: \"kubernetes.io/projected/614a50f9-81ab-4bd7-a01d-f8074be6b773-kube-api-access-4t9ds\") pod \"kube-state-metrics-0\" (UID: \"614a50f9-81ab-4bd7-a01d-f8074be6b773\") " pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.221624 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.230434 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304062a6-e2be-499a-a93f-5f439a525e46" path="/var/lib/kubelet/pods/304062a6-e2be-499a-a93f-5f439a525e46/volumes" Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.711683 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.783355 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"614a50f9-81ab-4bd7-a01d-f8074be6b773","Type":"ContainerStarted","Data":"324f95bc56d8e20b45cde69e646bf5d11ee867e64d087a14ff6efff1ebdcb026"} Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.994119 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.994567 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-central-agent" containerID="cri-o://b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a" gracePeriod=30 Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.994714 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="sg-core" containerID="cri-o://f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86" gracePeriod=30 Dec 11 18:20:41 crc kubenswrapper[4877]: I1211 18:20:41.994720 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-notification-agent" containerID="cri-o://46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c" gracePeriod=30 Dec 11 18:20:41 crc 
kubenswrapper[4877]: I1211 18:20:41.994654 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="proxy-httpd" containerID="cri-o://5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91" gracePeriod=30 Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.794542 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"614a50f9-81ab-4bd7-a01d-f8074be6b773","Type":"ContainerStarted","Data":"ca497fb9a7cf1edfda47e9f210dd2b1d392e32b30546ab86b9bb2f1156e3c339"} Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.796798 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807639 4877 generic.go:334] "Generic (PLEG): container finished" podID="b03f641a-4227-4c56-b913-dcf731f28610" containerID="5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91" exitCode=0 Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807676 4877 generic.go:334] "Generic (PLEG): container finished" podID="b03f641a-4227-4c56-b913-dcf731f28610" containerID="f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86" exitCode=2 Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807686 4877 generic.go:334] "Generic (PLEG): container finished" podID="b03f641a-4227-4c56-b913-dcf731f28610" containerID="b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a" exitCode=0 Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807715 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerDied","Data":"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91"} Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807743 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerDied","Data":"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86"} Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.807757 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerDied","Data":"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a"} Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.815349 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.361876363 podStartE2EDuration="2.815321476s" podCreationTimestamp="2025-12-11 18:20:40 +0000 UTC" firstStartedPulling="2025-12-11 18:20:41.722707011 +0000 UTC m=+1202.748951055" lastFinishedPulling="2025-12-11 18:20:42.176152124 +0000 UTC m=+1203.202396168" observedRunningTime="2025-12-11 18:20:42.813342943 +0000 UTC m=+1203.839587007" watchObservedRunningTime="2025-12-11 18:20:42.815321476 +0000 UTC m=+1203.841565540" Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.949315 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.949363 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.996512 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:20:42 crc kubenswrapper[4877]: I1211 18:20:42.996579 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:20:43 crc kubenswrapper[4877]: I1211 18:20:43.006249 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 18:20:43 crc kubenswrapper[4877]: I1211 18:20:43.082068 4877 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:20:43 crc kubenswrapper[4877]: I1211 18:20:43.082122 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:20:43 crc kubenswrapper[4877]: I1211 18:20:43.182311 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:20:44 crc kubenswrapper[4877]: I1211 18:20:44.172679 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:20:44 crc kubenswrapper[4877]: I1211 18:20:44.173176 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 18:20:44 crc kubenswrapper[4877]: I1211 18:20:44.359875 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.753230 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.830782 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831389 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831533 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831616 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831744 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831827 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.831845 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhhs\" (UniqueName: \"kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.832269 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts\") pod \"b03f641a-4227-4c56-b913-dcf731f28610\" (UID: \"b03f641a-4227-4c56-b913-dcf731f28610\") " Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.834618 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.841250 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs" (OuterVolumeSpecName: "kube-api-access-pkhhs") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "kube-api-access-pkhhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.842085 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.842140 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b03f641a-4227-4c56-b913-dcf731f28610-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.842154 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkhhs\" (UniqueName: \"kubernetes.io/projected/b03f641a-4227-4c56-b913-dcf731f28610-kube-api-access-pkhhs\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.846495 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts" (OuterVolumeSpecName: "scripts") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.894977 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.908668 4877 generic.go:334] "Generic (PLEG): container finished" podID="b03f641a-4227-4c56-b913-dcf731f28610" containerID="46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c" exitCode=0 Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.908734 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerDied","Data":"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c"} Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.908772 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b03f641a-4227-4c56-b913-dcf731f28610","Type":"ContainerDied","Data":"e058e4c62349e2eb16a42a2610c71f443e0877f3c8fc584dd337ae8a0dcc4797"} Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.908797 4877 scope.go:117] "RemoveContainer" containerID="5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.909031 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.944268 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.944301 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.978110 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:45 crc kubenswrapper[4877]: I1211 18:20:45.989680 4877 scope.go:117] "RemoveContainer" containerID="f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.012125 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data" (OuterVolumeSpecName: "config-data") pod "b03f641a-4227-4c56-b913-dcf731f28610" (UID: "b03f641a-4227-4c56-b913-dcf731f28610"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.015196 4877 scope.go:117] "RemoveContainer" containerID="46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.044168 4877 scope.go:117] "RemoveContainer" containerID="b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.047596 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.047634 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f641a-4227-4c56-b913-dcf731f28610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.072940 4877 scope.go:117] "RemoveContainer" containerID="5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.073536 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91\": container with ID starting with 5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91 not found: ID does not exist" containerID="5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.073565 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91"} err="failed to get container status \"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91\": rpc error: code = NotFound desc = could not find container 
\"5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91\": container with ID starting with 5d0afdae9f6033162dffbbbf62109635e0ac0cf04ca65c662cb3280b57aabc91 not found: ID does not exist" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.073588 4877 scope.go:117] "RemoveContainer" containerID="f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.073867 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86\": container with ID starting with f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86 not found: ID does not exist" containerID="f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.073885 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86"} err="failed to get container status \"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86\": rpc error: code = NotFound desc = could not find container \"f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86\": container with ID starting with f12abb5035b5b8eaf4bfb73a43f403f2d77be29a5acc9a19019d40356afcda86 not found: ID does not exist" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.073900 4877 scope.go:117] "RemoveContainer" containerID="46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.074359 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c\": container with ID starting with 46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c not found: ID does not exist" 
containerID="46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.074764 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c"} err="failed to get container status \"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c\": rpc error: code = NotFound desc = could not find container \"46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c\": container with ID starting with 46a3804b7a77972978342a65a425b938b23035e2729185d6078e8901f4db7e9c not found: ID does not exist" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.074779 4877 scope.go:117] "RemoveContainer" containerID="b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.075093 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a\": container with ID starting with b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a not found: ID does not exist" containerID="b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.075120 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a"} err="failed to get container status \"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a\": rpc error: code = NotFound desc = could not find container \"b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a\": container with ID starting with b8ce86310f9383b20d22666dc2645ff77cc86cff71b495c4d53220afc88eed7a not found: ID does not exist" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.257273 4877 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.312913 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.326495 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.327048 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-notification-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327078 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-notification-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.327113 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-central-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327123 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-central-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.327139 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="sg-core" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327148 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="sg-core" Dec 11 18:20:46 crc kubenswrapper[4877]: E1211 18:20:46.327176 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="proxy-httpd" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327185 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="proxy-httpd" Dec 11 18:20:46 crc 
kubenswrapper[4877]: I1211 18:20:46.327455 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-central-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327488 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="ceilometer-notification-agent" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327508 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="proxy-httpd" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.327519 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03f641a-4227-4c56-b913-dcf731f28610" containerName="sg-core" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.335567 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.338072 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.338544 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.338823 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.339189 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460060 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 
18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460118 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460154 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460431 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460487 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460516 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460564 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wfr\" (UniqueName: \"kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.460640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562257 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562301 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562326 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562438 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562458 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562475 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562511 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66wfr\" (UniqueName: \"kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562585 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.562842 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.563767 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.567776 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.567834 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.569551 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.570192 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.579610 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.585841 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66wfr\" (UniqueName: \"kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr\") pod \"ceilometer-0\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.638915 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.639735 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.639859 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.641484 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.641628 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" 
containerID="cri-o://77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a" gracePeriod=600 Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.689895 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.926063 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a" exitCode=0 Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.926245 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a"} Dec 11 18:20:46 crc kubenswrapper[4877]: I1211 18:20:46.926757 4877 scope.go:117] "RemoveContainer" containerID="bf1d9959e41610cc03f269ef917fbce5242b11790b9a8a9c1fa1169950769bd5" Dec 11 18:20:47 crc kubenswrapper[4877]: I1211 18:20:47.203174 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:47 crc kubenswrapper[4877]: I1211 18:20:47.228904 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03f641a-4227-4c56-b913-dcf731f28610" path="/var/lib/kubelet/pods/b03f641a-4227-4c56-b913-dcf731f28610/volumes" Dec 11 18:20:47 crc kubenswrapper[4877]: I1211 18:20:47.948088 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491"} Dec 11 18:20:47 crc kubenswrapper[4877]: I1211 18:20:47.960949 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerStarted","Data":"150dd964b53f871369ab749d5a0d5a60860ca5e094a986986fc8c5312afe7c4e"} Dec 11 18:20:48 crc kubenswrapper[4877]: I1211 18:20:48.975421 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerStarted","Data":"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18"} Dec 11 18:20:49 crc kubenswrapper[4877]: I1211 18:20:49.987653 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerStarted","Data":"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c"} Dec 11 18:20:51 crc kubenswrapper[4877]: I1211 18:20:51.002051 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerStarted","Data":"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f"} Dec 11 18:20:51 crc kubenswrapper[4877]: I1211 18:20:51.232686 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.040009 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerStarted","Data":"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b"} Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.040739 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.078054 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.539809145 podStartE2EDuration="7.078027385s" podCreationTimestamp="2025-12-11 18:20:46 +0000 UTC" firstStartedPulling="2025-12-11 
18:20:47.207507886 +0000 UTC m=+1208.233751930" lastFinishedPulling="2025-12-11 18:20:51.745726126 +0000 UTC m=+1212.771970170" observedRunningTime="2025-12-11 18:20:53.068497906 +0000 UTC m=+1214.094741960" watchObservedRunningTime="2025-12-11 18:20:53.078027385 +0000 UTC m=+1214.104271429" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.085329 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.085901 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.090223 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:20:53 crc kubenswrapper[4877]: I1211 18:20:53.090765 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:20:54 crc kubenswrapper[4877]: I1211 18:20:54.053634 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:20:54 crc kubenswrapper[4877]: I1211 18:20:54.059613 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.135339 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.139714 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.159644 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.242276 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.242336 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44rh\" (UniqueName: \"kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.242383 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.242886 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.243009 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.243068 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.254649 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.254965 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-log" containerID="cri-o://8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67" gracePeriod=30 Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.255091 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-api" containerID="cri-o://b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82" gracePeriod=30 Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.297068 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.297400 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-central-agent" containerID="cri-o://aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" gracePeriod=30 
Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.297492 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="proxy-httpd" containerID="cri-o://c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" gracePeriod=30 Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.297492 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-notification-agent" containerID="cri-o://6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" gracePeriod=30 Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.297492 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="sg-core" containerID="cri-o://6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" gracePeriod=30 Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.345212 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.345589 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.345689 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.345826 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.345916 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44rh\" (UniqueName: \"kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.346011 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.346646 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.346713 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0\") pod 
\"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.347317 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.347460 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.347480 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.373435 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44rh\" (UniqueName: \"kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh\") pod \"dnsmasq-dns-89c5cd4d5-dlfml\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:57 crc kubenswrapper[4877]: I1211 18:20:57.466190 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.017296 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.096750 4877 generic.go:334] "Generic (PLEG): container finished" podID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerID="c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" exitCode=0 Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.097107 4877 generic.go:334] "Generic (PLEG): container finished" podID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerID="6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" exitCode=2 Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.096826 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerDied","Data":"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b"} Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.097163 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerDied","Data":"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f"} Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.098781 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" event={"ID":"e1069a2c-3591-4093-951b-5de43a45cdb6","Type":"ContainerStarted","Data":"baae782f21455d0467186e0b10657da68be91f7b8136823c791228be59e77c96"} Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.101548 4877 generic.go:334] "Generic (PLEG): container finished" podID="6f92e6d6-4834-426a-b69b-5f403074e948" containerID="8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67" exitCode=143 Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.101576 4877 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerDied","Data":"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67"} Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.866541 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.990725 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.990874 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.990900 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.990964 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.990991 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66wfr\" (UniqueName: 
\"kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.991183 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.991249 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.991310 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs\") pod \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\" (UID: \"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5\") " Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.993370 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.994387 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:20:58 crc kubenswrapper[4877]: I1211 18:20:58.999507 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr" (OuterVolumeSpecName: "kube-api-access-66wfr") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "kube-api-access-66wfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.027564 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts" (OuterVolumeSpecName: "scripts") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.030221 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.068351 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094733 4877 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094782 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094797 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094811 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66wfr\" (UniqueName: \"kubernetes.io/projected/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-kube-api-access-66wfr\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094826 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.094840 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.121852 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: 
"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124500 4877 generic.go:334] "Generic (PLEG): container finished" podID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerID="6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" exitCode=0 Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124535 4877 generic.go:334] "Generic (PLEG): container finished" podID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerID="aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" exitCode=0 Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124638 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerDied","Data":"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c"} Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124683 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerDied","Data":"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18"} Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124699 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5","Type":"ContainerDied","Data":"150dd964b53f871369ab749d5a0d5a60860ca5e094a986986fc8c5312afe7c4e"} Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124705 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.124722 4877 scope.go:117] "RemoveContainer" containerID="c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.128473 4877 generic.go:334] "Generic (PLEG): container finished" podID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerID="bd6c6fa5d2c000aa8df08c1d0961b00430b4136cfd8ce8dd8db3d35683049651" exitCode=0 Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.128523 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" event={"ID":"e1069a2c-3591-4093-951b-5de43a45cdb6","Type":"ContainerDied","Data":"bd6c6fa5d2c000aa8df08c1d0961b00430b4136cfd8ce8dd8db3d35683049651"} Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.129584 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data" (OuterVolumeSpecName: "config-data") pod "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" (UID: "c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.190554 4877 scope.go:117] "RemoveContainer" containerID="6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.196977 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.197005 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.223581 4877 scope.go:117] "RemoveContainer" containerID="6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.251332 4877 scope.go:117] "RemoveContainer" containerID="aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.276779 4877 scope.go:117] "RemoveContainer" containerID="c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.277196 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b\": container with ID starting with c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b not found: ID does not exist" containerID="c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.277224 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b"} 
err="failed to get container status \"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b\": rpc error: code = NotFound desc = could not find container \"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b\": container with ID starting with c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.277249 4877 scope.go:117] "RemoveContainer" containerID="6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.277673 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f\": container with ID starting with 6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f not found: ID does not exist" containerID="6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.277755 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f"} err="failed to get container status \"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f\": rpc error: code = NotFound desc = could not find container \"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f\": container with ID starting with 6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.277795 4877 scope.go:117] "RemoveContainer" containerID="6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.278172 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c\": container with ID starting with 6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c not found: ID does not exist" containerID="6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278200 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c"} err="failed to get container status \"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c\": rpc error: code = NotFound desc = could not find container \"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c\": container with ID starting with 6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278217 4877 scope.go:117] "RemoveContainer" containerID="aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.278469 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18\": container with ID starting with aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18 not found: ID does not exist" containerID="aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278492 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18"} err="failed to get container status \"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18\": rpc error: code = NotFound desc = could not find container \"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18\": container with ID 
starting with aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18 not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278509 4877 scope.go:117] "RemoveContainer" containerID="c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278713 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b"} err="failed to get container status \"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b\": rpc error: code = NotFound desc = could not find container \"c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b\": container with ID starting with c9dfc08a3453d536abef40c9839c908d687d70714b9c39f5bdf4a72ed6daaa0b not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278728 4877 scope.go:117] "RemoveContainer" containerID="6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278905 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f"} err="failed to get container status \"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f\": rpc error: code = NotFound desc = could not find container \"6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f\": container with ID starting with 6de87fe2edfcb5d46b80cba5c194fff73b679cc67af26d608fd04b99549c695f not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.278926 4877 scope.go:117] "RemoveContainer" containerID="6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.279139 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c"} err="failed to get container status \"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c\": rpc error: code = NotFound desc = could not find container \"6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c\": container with ID starting with 6585a2944ec07be13848d86a69d3fb6d5d7c52ef6855c1dc98c03d366205530c not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.279155 4877 scope.go:117] "RemoveContainer" containerID="aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.279365 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18"} err="failed to get container status \"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18\": rpc error: code = NotFound desc = could not find container \"aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18\": container with ID starting with aef23b0772993a0f67bb14a9389b9a0f3905fdc132109f1db17c9d640f854a18 not found: ID does not exist" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.467158 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.486354 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.496114 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.496662 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-notification-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.496689 4877 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-notification-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.496717 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="proxy-httpd" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.496727 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="proxy-httpd" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.496748 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-central-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.496756 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-central-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.496798 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="sg-core" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.496807 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="sg-core" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.497061 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="proxy-httpd" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.497087 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-central-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.497107 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="ceilometer-notification-agent" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.497123 4877 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" containerName="sg-core" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.499282 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.502547 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.504736 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.504949 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.512718 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.604684 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605156 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605192 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858r4\" (UniqueName: \"kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4\") pod 
\"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605221 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605250 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605273 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605310 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.605360 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: 
I1211 18:20:59.615601 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:20:59 crc kubenswrapper[4877]: E1211 18:20:59.617853 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-858r4 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="565f4b9d-3cef-4447-bcb7-5db1490e4e1a" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.708749 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.708817 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.708864 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.708903 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 
18:20:59.709075 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.709111 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.709143 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858r4\" (UniqueName: \"kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.709174 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.710556 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.710880 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd\") pod \"ceilometer-0\" (UID: 
\"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.715310 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.718837 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.719296 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.719328 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.719555 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:20:59 crc kubenswrapper[4877]: I1211 18:20:59.727088 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858r4\" (UniqueName: 
\"kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4\") pod \"ceilometer-0\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " pod="openstack/ceilometer-0" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.145960 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" event={"ID":"e1069a2c-3591-4093-951b-5de43a45cdb6","Type":"ContainerStarted","Data":"647280d5c3baeb8c0e2d5671448b61ebfabc051701f821ab1089a1b6903a703b"} Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.146047 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.149319 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.162909 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.167207 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" podStartSLOduration=3.167184759 podStartE2EDuration="3.167184759s" podCreationTimestamp="2025-12-11 18:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:00.163411846 +0000 UTC m=+1221.189655930" watchObservedRunningTime="2025-12-11 18:21:00.167184759 +0000 UTC m=+1221.193428813" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.322538 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-858r4\" (UniqueName: \"kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 
18:21:00.323677 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.323734 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.323771 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.323818 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.324059 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.324213 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: 
"565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.324342 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.324721 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd\") pod \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\" (UID: \"565f4b9d-3cef-4447-bcb7-5db1490e4e1a\") " Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.325182 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.325661 4877 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.325693 4877 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.329451 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.329551 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data" (OuterVolumeSpecName: "config-data") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.329924 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4" (OuterVolumeSpecName: "kube-api-access-858r4") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "kube-api-access-858r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.330323 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.332048 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.338916 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts" (OuterVolumeSpecName: "scripts") pod "565f4b9d-3cef-4447-bcb7-5db1490e4e1a" (UID: "565f4b9d-3cef-4447-bcb7-5db1490e4e1a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.428634 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.429208 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.429226 4877 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.429238 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.429258 4877 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.429271 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-858r4\" (UniqueName: \"kubernetes.io/projected/565f4b9d-3cef-4447-bcb7-5db1490e4e1a-kube-api-access-858r4\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:00 crc kubenswrapper[4877]: I1211 18:21:00.988539 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.145764 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle\") pod \"6f92e6d6-4834-426a-b69b-5f403074e948\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.146005 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs\") pod \"6f92e6d6-4834-426a-b69b-5f403074e948\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.146055 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfqxz\" (UniqueName: \"kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz\") pod \"6f92e6d6-4834-426a-b69b-5f403074e948\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.146088 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data\") pod \"6f92e6d6-4834-426a-b69b-5f403074e948\" (UID: \"6f92e6d6-4834-426a-b69b-5f403074e948\") " Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.147891 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs" (OuterVolumeSpecName: "logs") pod "6f92e6d6-4834-426a-b69b-5f403074e948" (UID: "6f92e6d6-4834-426a-b69b-5f403074e948"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.175730 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz" (OuterVolumeSpecName: "kube-api-access-sfqxz") pod "6f92e6d6-4834-426a-b69b-5f403074e948" (UID: "6f92e6d6-4834-426a-b69b-5f403074e948"). InnerVolumeSpecName "kube-api-access-sfqxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.182345 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.182396 4877 generic.go:334] "Generic (PLEG): container finished" podID="6f92e6d6-4834-426a-b69b-5f403074e948" containerID="b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82" exitCode=0 Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.182482 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerDied","Data":"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82"} Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.182524 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f92e6d6-4834-426a-b69b-5f403074e948","Type":"ContainerDied","Data":"2e6abcfff9580a2e39df5294de17a3075843f8126d873deb1843c3c4632c4f22"} Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.183972 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.184678 4877 scope.go:117] "RemoveContainer" containerID="b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.194597 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data" (OuterVolumeSpecName: "config-data") pod "6f92e6d6-4834-426a-b69b-5f403074e948" (UID: "6f92e6d6-4834-426a-b69b-5f403074e948"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.224793 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f92e6d6-4834-426a-b69b-5f403074e948" (UID: "6f92e6d6-4834-426a-b69b-5f403074e948"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.243761 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5" path="/var/lib/kubelet/pods/c8ace7fc-a7cb-4e59-9cb5-5e548eef83e5/volumes" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.279755 4877 scope.go:117] "RemoveContainer" containerID="8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.283945 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f92e6d6-4834-426a-b69b-5f403074e948-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.283977 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfqxz\" (UniqueName: \"kubernetes.io/projected/6f92e6d6-4834-426a-b69b-5f403074e948-kube-api-access-sfqxz\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.284653 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.284678 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f92e6d6-4834-426a-b69b-5f403074e948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.328753 4877 scope.go:117] "RemoveContainer" containerID="b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82" Dec 11 18:21:01 crc kubenswrapper[4877]: E1211 18:21:01.336668 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82\": container with ID 
starting with b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82 not found: ID does not exist" containerID="b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.336718 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82"} err="failed to get container status \"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82\": rpc error: code = NotFound desc = could not find container \"b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82\": container with ID starting with b6490f26460b4fa8fa87e401766c7f70fa476081d02dfffe0d002ffa5500ad82 not found: ID does not exist" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.336751 4877 scope.go:117] "RemoveContainer" containerID="8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67" Dec 11 18:21:01 crc kubenswrapper[4877]: E1211 18:21:01.337206 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67\": container with ID starting with 8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67 not found: ID does not exist" containerID="8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.337229 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67"} err="failed to get container status \"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67\": rpc error: code = NotFound desc = could not find container \"8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67\": container with ID starting with 8fc94228dd6405a999716b24ad61fd3a8f97377e74924b02e3372528394e9f67 not found: 
ID does not exist" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.356517 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.365904 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.379860 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: E1211 18:21:01.380499 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-log" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.380520 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-log" Dec 11 18:21:01 crc kubenswrapper[4877]: E1211 18:21:01.380558 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-api" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.380568 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-api" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.380842 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-api" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.380862 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" containerName="nova-api-log" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.383421 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.386719 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.387037 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.387089 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.391207 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.494619 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-run-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495194 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495304 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-config-data\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495340 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495396 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fjn\" (UniqueName: \"kubernetes.io/projected/f2235a50-9478-4081-bdad-597e59773901-kube-api-access-68fjn\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495420 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-scripts\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495437 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.495493 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-log-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.506138 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.525171 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.546510 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.548486 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.551662 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.551861 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.552014 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.558615 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.597648 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-config-data\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.597741 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600089 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fjn\" (UniqueName: \"kubernetes.io/projected/f2235a50-9478-4081-bdad-597e59773901-kube-api-access-68fjn\") pod \"ceilometer-0\" (UID: 
\"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600191 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-scripts\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600229 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600443 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-log-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600520 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-run-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.600591 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.601260 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-log-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.601459 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2235a50-9478-4081-bdad-597e59773901-run-httpd\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.605220 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.605234 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.605326 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-config-data\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.605779 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.606483 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2235a50-9478-4081-bdad-597e59773901-scripts\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.624201 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fjn\" (UniqueName: \"kubernetes.io/projected/f2235a50-9478-4081-bdad-597e59773901-kube-api-access-68fjn\") pod \"ceilometer-0\" (UID: \"f2235a50-9478-4081-bdad-597e59773901\") " pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.703821 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.703900 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.704145 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.704261 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.704424 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmt29\" (UniqueName: \"kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.704539 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.712360 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.807270 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.807325 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.807389 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.808704 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.808844 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.809236 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmt29\" (UniqueName: \"kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.809357 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.814214 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.814767 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.816912 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.819101 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.837308 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmt29\" (UniqueName: \"kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29\") pod \"nova-api-0\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " pod="openstack/nova-api-0" Dec 11 18:21:01 crc kubenswrapper[4877]: I1211 18:21:01.870986 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:02 crc kubenswrapper[4877]: I1211 18:21:02.251349 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 18:21:02 crc kubenswrapper[4877]: I1211 18:21:02.261039 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:21:02 crc kubenswrapper[4877]: I1211 18:21:02.391212 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.207690 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2235a50-9478-4081-bdad-597e59773901","Type":"ContainerStarted","Data":"35b81569056401c99173aef8a78106e5a5a55bd78550c2e551c5e2597eb81e34"} Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.208015 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2235a50-9478-4081-bdad-597e59773901","Type":"ContainerStarted","Data":"a17ff23d9cc4e78f87bb9fc0bfefd0619317a54850eca8f35d4169a540cba5f5"} Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.210594 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerStarted","Data":"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308"} Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.210653 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerStarted","Data":"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb"} Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.210669 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerStarted","Data":"0bfdd566a4f7a104ea954eece27548ba05225eec982f4ae735d09ece5ca9c735"} 
Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.227422 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="565f4b9d-3cef-4447-bcb7-5db1490e4e1a" path="/var/lib/kubelet/pods/565f4b9d-3cef-4447-bcb7-5db1490e4e1a/volumes" Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.228054 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f92e6d6-4834-426a-b69b-5f403074e948" path="/var/lib/kubelet/pods/6f92e6d6-4834-426a-b69b-5f403074e948/volumes" Dec 11 18:21:03 crc kubenswrapper[4877]: I1211 18:21:03.240244 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.240221165 podStartE2EDuration="2.240221165s" podCreationTimestamp="2025-12-11 18:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:03.23414488 +0000 UTC m=+1224.260388924" watchObservedRunningTime="2025-12-11 18:21:03.240221165 +0000 UTC m=+1224.266465209" Dec 11 18:21:04 crc kubenswrapper[4877]: I1211 18:21:04.224198 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2235a50-9478-4081-bdad-597e59773901","Type":"ContainerStarted","Data":"876ffa2380a64a92e516cb12597aeb98bcef4785f187a94648fb6163bcec848e"} Dec 11 18:21:05 crc kubenswrapper[4877]: I1211 18:21:05.236572 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2235a50-9478-4081-bdad-597e59773901","Type":"ContainerStarted","Data":"a809854be2e9a3649bc9291c7e13abddeaa336eb6fd4e5e12d524d567a25abf3"} Dec 11 18:21:06 crc kubenswrapper[4877]: I1211 18:21:06.253797 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2235a50-9478-4081-bdad-597e59773901","Type":"ContainerStarted","Data":"effc42b472f8dafb2da32235ce34841930bfebd783b4e834a9299aa6710f6357"} Dec 11 18:21:06 crc kubenswrapper[4877]: I1211 
18:21:06.256510 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 18:21:06 crc kubenswrapper[4877]: I1211 18:21:06.295673 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.669542951 podStartE2EDuration="5.295646965s" podCreationTimestamp="2025-12-11 18:21:01 +0000 UTC" firstStartedPulling="2025-12-11 18:21:02.26077374 +0000 UTC m=+1223.287017784" lastFinishedPulling="2025-12-11 18:21:05.886877754 +0000 UTC m=+1226.913121798" observedRunningTime="2025-12-11 18:21:06.288418349 +0000 UTC m=+1227.314662393" watchObservedRunningTime="2025-12-11 18:21:06.295646965 +0000 UTC m=+1227.321891009" Dec 11 18:21:06 crc kubenswrapper[4877]: E1211 18:21:06.864802 4877 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fd7d7711402f756b52758d109c54db5ff263ccc52744cf9272dfa359a813e399/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fd7d7711402f756b52758d109c54db5ff263ccc52744cf9272dfa359a813e399/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_b03f641a-4227-4c56-b913-dcf731f28610/ceilometer-central-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_b03f641a-4227-4c56-b913-dcf731f28610/ceilometer-central-agent/0.log: no such file or directory Dec 11 18:21:07 crc kubenswrapper[4877]: I1211 18:21:07.468668 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:21:07 crc kubenswrapper[4877]: I1211 18:21:07.565721 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:21:07 crc kubenswrapper[4877]: I1211 18:21:07.566010 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" 
podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="dnsmasq-dns" containerID="cri-o://37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941" gracePeriod=10 Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.105901 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217017 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217080 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217123 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217275 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spwkt\" (UniqueName: \"kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217319 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.217399 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb\") pod \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\" (UID: \"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23\") " Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.224725 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt" (OuterVolumeSpecName: "kube-api-access-spwkt") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "kube-api-access-spwkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.277201 4877 generic.go:334] "Generic (PLEG): container finished" podID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerID="37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941" exitCode=0 Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.278471 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.278452 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerDied","Data":"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941"} Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.278690 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" event={"ID":"a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23","Type":"ContainerDied","Data":"4c39fbaac7103894703a33f9b9202f816a62ffb12b88d6047fcbacd9726792bb"} Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.278738 4877 scope.go:117] "RemoveContainer" containerID="37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.283603 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.283890 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.288797 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.299848 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config" (OuterVolumeSpecName: "config") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.316804 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" (UID: "a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.320525 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.322055 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.322101 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.322114 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.322126 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spwkt\" (UniqueName: \"kubernetes.io/projected/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-kube-api-access-spwkt\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.322136 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.374973 4877 scope.go:117] "RemoveContainer" containerID="9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac" Dec 11 18:21:08 crc kubenswrapper[4877]: E1211 18:21:08.380399 4877 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat 
"/var/lib/containers/storage/overlay/b96fb0f14e6ad841d00d2a4b413ca47ed65e2d987edb05595acd0ef6624ca45d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b96fb0f14e6ad841d00d2a4b413ca47ed65e2d987edb05595acd0ef6624ca45d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_b03f641a-4227-4c56-b913-dcf731f28610/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_b03f641a-4227-4c56-b913-dcf731f28610/ceilometer-notification-agent/0.log: no such file or directory Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.405730 4877 scope.go:117] "RemoveContainer" containerID="37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941" Dec 11 18:21:08 crc kubenswrapper[4877]: E1211 18:21:08.406402 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941\": container with ID starting with 37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941 not found: ID does not exist" containerID="37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.406448 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941"} err="failed to get container status \"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941\": rpc error: code = NotFound desc = could not find container \"37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941\": container with ID starting with 37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941 not found: ID does not exist" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.406479 4877 scope.go:117] "RemoveContainer" containerID="9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac" Dec 11 
18:21:08 crc kubenswrapper[4877]: E1211 18:21:08.406760 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac\": container with ID starting with 9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac not found: ID does not exist" containerID="9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.406790 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac"} err="failed to get container status \"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac\": rpc error: code = NotFound desc = could not find container \"9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac\": container with ID starting with 9b399a0956f55fe9cca580428c235833753bd28d98922721e1de236179fc78ac not found: ID does not exist" Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.664074 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:21:08 crc kubenswrapper[4877]: I1211 18:21:08.684126 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-54z2g"] Dec 11 18:21:08 crc kubenswrapper[4877]: W1211 18:21:08.754916 4877 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8ace7fc_a7cb_4e59_9cb5_5e548eef83e5.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8ace7fc_a7cb_4e59_9cb5_5e548eef83e5.slice: no such file or directory Dec 11 18:21:08 crc kubenswrapper[4877]: W1211 18:21:08.758634 4877 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565f4b9d_3cef_4447_bcb7_5db1490e4e1a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565f4b9d_3cef_4447_bcb7_5db1490e4e1a.slice: no such file or directory Dec 11 18:21:09 crc kubenswrapper[4877]: E1211 18:21:09.038243 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9cbb8b7_0fd9_40ec_b77f_39ea172c7e23.slice/crio-37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b22f6a_5ada_4baa_abc8_21359515bb02.slice/crio-ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b22f6a_5ada_4baa_abc8_21359515bb02.slice/crio-conmon-ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9cbb8b7_0fd9_40ec_b77f_39ea172c7e23.slice/crio-4c39fbaac7103894703a33f9b9202f816a62ffb12b88d6047fcbacd9726792bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed1dadc_5c80_4e4c_9342_460ceed5a7a2.slice/crio-conmon-3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed1dadc_5c80_4e4c_9342_460ceed5a7a2.slice/crio-3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9cbb8b7_0fd9_40ec_b77f_39ea172c7e23.slice/crio-conmon-37c90b01f6b19e93822c91723f511bff2842b3ac07d8bc948fca2163fc095941.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9cbb8b7_0fd9_40ec_b77f_39ea172c7e23.slice\": RecentStats: unable to find data in memory cache]" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.239959 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" path="/var/lib/kubelet/pods/a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23/volumes" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.293467 4877 generic.go:334] "Generic (PLEG): container finished" podID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerID="ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242" exitCode=137 Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.293562 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerDied","Data":"ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242"} Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.293620 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5b22f6a-5ada-4baa-abc8-21359515bb02","Type":"ContainerDied","Data":"8e238ff2a74f006f966362f4cc17056c5cd2a6593c1c3322b5f8e2d30efe314c"} Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.293631 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e238ff2a74f006f966362f4cc17056c5cd2a6593c1c3322b5f8e2d30efe314c" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.295488 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"bed1dadc-5c80-4e4c-9342-460ceed5a7a2","Type":"ContainerDied","Data":"3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c"} Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.295538 4877 generic.go:334] "Generic (PLEG): container finished" podID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" containerID="3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c" exitCode=137 Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.295601 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bed1dadc-5c80-4e4c-9342-460ceed5a7a2","Type":"ContainerDied","Data":"fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a"} Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.295639 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1b49fb80acf4cf7a0e4afbecbafc739dc3eb13ca6ea55dc847a265febc904a" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.318129 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.335490 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.450987 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs\") pod \"d5b22f6a-5ada-4baa-abc8-21359515bb02\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451133 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data\") pod \"d5b22f6a-5ada-4baa-abc8-21359515bb02\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451170 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfd4x\" (UniqueName: \"kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x\") pod \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451190 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data\") pod \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451258 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle\") pod \"d5b22f6a-5ada-4baa-abc8-21359515bb02\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451366 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle\") pod \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\" (UID: \"bed1dadc-5c80-4e4c-9342-460ceed5a7a2\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.451449 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x798\" (UniqueName: \"kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798\") pod \"d5b22f6a-5ada-4baa-abc8-21359515bb02\" (UID: \"d5b22f6a-5ada-4baa-abc8-21359515bb02\") " Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.452841 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs" (OuterVolumeSpecName: "logs") pod "d5b22f6a-5ada-4baa-abc8-21359515bb02" (UID: "d5b22f6a-5ada-4baa-abc8-21359515bb02"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.459578 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798" (OuterVolumeSpecName: "kube-api-access-4x798") pod "d5b22f6a-5ada-4baa-abc8-21359515bb02" (UID: "d5b22f6a-5ada-4baa-abc8-21359515bb02"). InnerVolumeSpecName "kube-api-access-4x798". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.459879 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x" (OuterVolumeSpecName: "kube-api-access-cfd4x") pod "bed1dadc-5c80-4e4c-9342-460ceed5a7a2" (UID: "bed1dadc-5c80-4e4c-9342-460ceed5a7a2"). InnerVolumeSpecName "kube-api-access-cfd4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.490532 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed1dadc-5c80-4e4c-9342-460ceed5a7a2" (UID: "bed1dadc-5c80-4e4c-9342-460ceed5a7a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.491212 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data" (OuterVolumeSpecName: "config-data") pod "d5b22f6a-5ada-4baa-abc8-21359515bb02" (UID: "d5b22f6a-5ada-4baa-abc8-21359515bb02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.494400 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5b22f6a-5ada-4baa-abc8-21359515bb02" (UID: "d5b22f6a-5ada-4baa-abc8-21359515bb02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.495044 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data" (OuterVolumeSpecName: "config-data") pod "bed1dadc-5c80-4e4c-9342-460ceed5a7a2" (UID: "bed1dadc-5c80-4e4c-9342-460ceed5a7a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554535 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554576 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554586 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x798\" (UniqueName: \"kubernetes.io/projected/d5b22f6a-5ada-4baa-abc8-21359515bb02-kube-api-access-4x798\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554600 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5b22f6a-5ada-4baa-abc8-21359515bb02-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554611 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b22f6a-5ada-4baa-abc8-21359515bb02-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554621 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfd4x\" (UniqueName: \"kubernetes.io/projected/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-kube-api-access-cfd4x\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:09 crc kubenswrapper[4877]: I1211 18:21:09.554630 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed1dadc-5c80-4e4c-9342-460ceed5a7a2-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.305992 4877 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.309601 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.355904 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.366222 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.377037 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.388194 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.405314 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: E1211 18:21:10.405948 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-log" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.405973 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-log" Dec 11 18:21:10 crc kubenswrapper[4877]: E1211 18:21:10.406013 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406022 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 18:21:10 crc kubenswrapper[4877]: E1211 18:21:10.406040 4877 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-metadata" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406048 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-metadata" Dec 11 18:21:10 crc kubenswrapper[4877]: E1211 18:21:10.406060 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="init" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406067 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="init" Dec 11 18:21:10 crc kubenswrapper[4877]: E1211 18:21:10.406084 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="dnsmasq-dns" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406103 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="dnsmasq-dns" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406362 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-metadata" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406402 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="dnsmasq-dns" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406419 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.406427 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" containerName="nova-metadata-log" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.407261 4877 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.410276 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.413354 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.415313 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.419575 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.437845 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.451456 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.461513 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.462937 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.464703 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586353 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sls7w\" (UniqueName: \"kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586425 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586497 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586581 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586623 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586650 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586679 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586697 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnbl\" (UniqueName: \"kubernetes.io/projected/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-kube-api-access-hlnbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586714 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.586765 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688291 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688462 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688537 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sls7w\" (UniqueName: \"kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688576 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688662 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688737 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688779 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688825 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.688871 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 
18:21:10.691530 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnbl\" (UniqueName: \"kubernetes.io/projected/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-kube-api-access-hlnbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.695842 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.695858 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.696911 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.697972 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.699111 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.699757 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.700831 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.702161 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.716063 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sls7w\" (UniqueName: \"kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w\") pod \"nova-metadata-0\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " pod="openstack/nova-metadata-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.723508 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlnbl\" (UniqueName: \"kubernetes.io/projected/9ab5e774-7a03-4065-9eb8-c68aaff8d6c6-kube-api-access-hlnbl\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.748427 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:10 crc kubenswrapper[4877]: I1211 18:21:10.776034 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.227802 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed1dadc-5c80-4e4c-9342-460ceed5a7a2" path="/var/lib/kubelet/pods/bed1dadc-5c80-4e4c-9342-460ceed5a7a2/volumes" Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.229133 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b22f6a-5ada-4baa-abc8-21359515bb02" path="/var/lib/kubelet/pods/d5b22f6a-5ada-4baa-abc8-21359515bb02/volumes" Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.331339 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.432465 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.871658 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:21:11 crc kubenswrapper[4877]: I1211 18:21:11.871961 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.326570 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerStarted","Data":"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc"} Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.326622 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerStarted","Data":"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59"} Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.326637 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerStarted","Data":"a0e7c4201fb6668389edfe7c4ee0a725b1babf97b2856cf85385f99acff7b7b9"} Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.330073 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6","Type":"ContainerStarted","Data":"58fa13a04711e76cb4fc70e4144def374051a61fa51bf72916009f65d154b747"} Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.333897 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ab5e774-7a03-4065-9eb8-c68aaff8d6c6","Type":"ContainerStarted","Data":"25bd904ad359bb8369e541efa6a90bb0b61cd8598302d760799b516f79094052"} Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.357225 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.357194778 podStartE2EDuration="2.357194778s" podCreationTimestamp="2025-12-11 18:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:12.347340011 +0000 UTC m=+1233.373584075" watchObservedRunningTime="2025-12-11 18:21:12.357194778 +0000 UTC m=+1233.383438842" Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.386288 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.386252767 podStartE2EDuration="2.386252767s" podCreationTimestamp="2025-12-11 18:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:12.375954737 +0000 UTC m=+1233.402198781" watchObservedRunningTime="2025-12-11 18:21:12.386252767 +0000 UTC m=+1233.412496831" Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.888915 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:12 crc kubenswrapper[4877]: I1211 18:21:12.888978 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:13 crc kubenswrapper[4877]: I1211 18:21:13.076165 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-54z2g" podUID="a9cbb8b7-0fd9-40ec-b77f-39ea172c7e23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: i/o timeout" Dec 11 18:21:15 crc kubenswrapper[4877]: I1211 18:21:15.749153 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:15 crc kubenswrapper[4877]: I1211 18:21:15.778686 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:21:15 crc kubenswrapper[4877]: I1211 18:21:15.778787 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:21:20 crc kubenswrapper[4877]: I1211 18:21:20.748773 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:20 crc kubenswrapper[4877]: I1211 18:21:20.772207 4877 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:20 crc kubenswrapper[4877]: I1211 18:21:20.778682 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 18:21:20 crc kubenswrapper[4877]: I1211 18:21:20.778762 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.463998 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.665548 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dpqp2"] Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.667768 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.671461 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.681699 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.682362 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dpqp2"] Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.696912 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.697015 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngd4t\" (UniqueName: \"kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.697075 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.697300 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.796615 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.796628 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.798301 4877 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ngd4t\" (UniqueName: \"kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.798407 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.798534 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.798597 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.807824 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.807869 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.820109 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.830251 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngd4t\" (UniqueName: \"kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t\") pod \"nova-cell1-cell-mapping-dpqp2\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.904419 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.905114 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.905611 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.939915 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:21:21 crc kubenswrapper[4877]: I1211 18:21:21.999461 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:22 crc kubenswrapper[4877]: I1211 18:21:22.405954 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dpqp2"] Dec 11 18:21:22 crc kubenswrapper[4877]: W1211 18:21:22.421131 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffdec34_8bf1_4c69_80b3_dd26a191e1d8.slice/crio-a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf WatchSource:0}: Error finding container a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf: Status 404 returned error can't find the container with id a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf Dec 11 18:21:22 crc kubenswrapper[4877]: I1211 18:21:22.463464 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dpqp2" event={"ID":"dffdec34-8bf1-4c69-80b3-dd26a191e1d8","Type":"ContainerStarted","Data":"a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf"} Dec 11 18:21:22 crc kubenswrapper[4877]: I1211 18:21:22.464686 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:21:22 crc kubenswrapper[4877]: I1211 18:21:22.486215 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:21:23 crc kubenswrapper[4877]: I1211 18:21:23.480224 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dpqp2" event={"ID":"dffdec34-8bf1-4c69-80b3-dd26a191e1d8","Type":"ContainerStarted","Data":"543ec38e8a3e8e19587f1e408d4710c214d2ecdca6ca9cf643b0d8303ab688f9"} Dec 11 18:21:23 crc kubenswrapper[4877]: I1211 18:21:23.511089 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dpqp2" podStartSLOduration=2.511049985 podStartE2EDuration="2.511049985s" 
podCreationTimestamp="2025-12-11 18:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:23.500481128 +0000 UTC m=+1244.526725222" watchObservedRunningTime="2025-12-11 18:21:23.511049985 +0000 UTC m=+1244.537294069" Dec 11 18:21:27 crc kubenswrapper[4877]: I1211 18:21:27.535886 4877 generic.go:334] "Generic (PLEG): container finished" podID="dffdec34-8bf1-4c69-80b3-dd26a191e1d8" containerID="543ec38e8a3e8e19587f1e408d4710c214d2ecdca6ca9cf643b0d8303ab688f9" exitCode=0 Dec 11 18:21:27 crc kubenswrapper[4877]: I1211 18:21:27.535995 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dpqp2" event={"ID":"dffdec34-8bf1-4c69-80b3-dd26a191e1d8","Type":"ContainerDied","Data":"543ec38e8a3e8e19587f1e408d4710c214d2ecdca6ca9cf643b0d8303ab688f9"} Dec 11 18:21:28 crc kubenswrapper[4877]: I1211 18:21:28.917050 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.099320 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts\") pod \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.099452 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngd4t\" (UniqueName: \"kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t\") pod \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.099659 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data\") pod \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.099695 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle\") pod \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\" (UID: \"dffdec34-8bf1-4c69-80b3-dd26a191e1d8\") " Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.106364 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t" (OuterVolumeSpecName: "kube-api-access-ngd4t") pod "dffdec34-8bf1-4c69-80b3-dd26a191e1d8" (UID: "dffdec34-8bf1-4c69-80b3-dd26a191e1d8"). InnerVolumeSpecName "kube-api-access-ngd4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.107762 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts" (OuterVolumeSpecName: "scripts") pod "dffdec34-8bf1-4c69-80b3-dd26a191e1d8" (UID: "dffdec34-8bf1-4c69-80b3-dd26a191e1d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.149733 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data" (OuterVolumeSpecName: "config-data") pod "dffdec34-8bf1-4c69-80b3-dd26a191e1d8" (UID: "dffdec34-8bf1-4c69-80b3-dd26a191e1d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.153177 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dffdec34-8bf1-4c69-80b3-dd26a191e1d8" (UID: "dffdec34-8bf1-4c69-80b3-dd26a191e1d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.202335 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.202394 4877 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.202404 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngd4t\" (UniqueName: \"kubernetes.io/projected/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-kube-api-access-ngd4t\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.202415 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffdec34-8bf1-4c69-80b3-dd26a191e1d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.557735 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dpqp2" event={"ID":"dffdec34-8bf1-4c69-80b3-dd26a191e1d8","Type":"ContainerDied","Data":"a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf"} Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.558237 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4dd35764e38301b56c54803129c745ef5f463927b47775f801107aa57397bdf" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.557857 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dpqp2" Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.712925 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.713958 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" containerID="cri-o://274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb" gracePeriod=30 Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.714075 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" containerID="cri-o://792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308" gracePeriod=30 Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.729427 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.729820 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="58cd2de3-6c98-4325-aaf5-134e9e44638a" containerName="nova-scheduler-scheduler" containerID="cri-o://8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250" gracePeriod=30 Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.741390 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.741807 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-log" containerID="cri-o://e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59" gracePeriod=30 Dec 11 18:21:29 crc kubenswrapper[4877]: I1211 18:21:29.742071 4877 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-metadata" containerID="cri-o://f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc" gracePeriod=30 Dec 11 18:21:30 crc kubenswrapper[4877]: I1211 18:21:30.570959 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerDied","Data":"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59"} Dec 11 18:21:30 crc kubenswrapper[4877]: I1211 18:21:30.570906 4877 generic.go:334] "Generic (PLEG): container finished" podID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerID="e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59" exitCode=143 Dec 11 18:21:30 crc kubenswrapper[4877]: I1211 18:21:30.574476 4877 generic.go:334] "Generic (PLEG): container finished" podID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerID="274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb" exitCode=143 Dec 11 18:21:30 crc kubenswrapper[4877]: I1211 18:21:30.574547 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerDied","Data":"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb"} Dec 11 18:21:31 crc kubenswrapper[4877]: I1211 18:21:31.726203 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.572783 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.613560 4877 generic.go:334] "Generic (PLEG): container finished" podID="58cd2de3-6c98-4325-aaf5-134e9e44638a" containerID="8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250" exitCode=0 Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.613636 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"58cd2de3-6c98-4325-aaf5-134e9e44638a","Type":"ContainerDied","Data":"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250"} Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.613681 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"58cd2de3-6c98-4325-aaf5-134e9e44638a","Type":"ContainerDied","Data":"9679aeccaf9910ecf87ac463157b6ace81aee3fe031e5dd616c681ce6d4fdc01"} Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.613706 4877 scope.go:117] "RemoveContainer" containerID="8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.613769 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.646634 4877 scope.go:117] "RemoveContainer" containerID="8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250" Dec 11 18:21:32 crc kubenswrapper[4877]: E1211 18:21:32.647950 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250\": container with ID starting with 8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250 not found: ID does not exist" containerID="8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.648046 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250"} err="failed to get container status \"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250\": rpc error: code = NotFound desc = could not find container \"8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250\": container with ID starting with 8ef11f597f8e9f5a17257184cd5254befffccb2774ba8b411214c757b3602250 not found: ID does not exist" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.691401 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle\") pod \"58cd2de3-6c98-4325-aaf5-134e9e44638a\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.691813 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6dfh\" (UniqueName: \"kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh\") pod \"58cd2de3-6c98-4325-aaf5-134e9e44638a\" (UID: 
\"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.692232 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data\") pod \"58cd2de3-6c98-4325-aaf5-134e9e44638a\" (UID: \"58cd2de3-6c98-4325-aaf5-134e9e44638a\") " Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.700845 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh" (OuterVolumeSpecName: "kube-api-access-c6dfh") pod "58cd2de3-6c98-4325-aaf5-134e9e44638a" (UID: "58cd2de3-6c98-4325-aaf5-134e9e44638a"). InnerVolumeSpecName "kube-api-access-c6dfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.730961 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data" (OuterVolumeSpecName: "config-data") pod "58cd2de3-6c98-4325-aaf5-134e9e44638a" (UID: "58cd2de3-6c98-4325-aaf5-134e9e44638a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.733070 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58cd2de3-6c98-4325-aaf5-134e9e44638a" (UID: "58cd2de3-6c98-4325-aaf5-134e9e44638a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.795213 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.795261 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cd2de3-6c98-4325-aaf5-134e9e44638a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.795280 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6dfh\" (UniqueName: \"kubernetes.io/projected/58cd2de3-6c98-4325-aaf5-134e9e44638a-kube-api-access-c6dfh\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.891739 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": read tcp 10.217.0.2:45942->10.217.0.196:8774: read: connection reset by peer" Dec 11 18:21:32 crc kubenswrapper[4877]: I1211 18:21:32.891782 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": read tcp 10.217.0.2:45948->10.217.0.196:8774: read: connection reset by peer" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.039819 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.058734 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.100263 4877 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.102156 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffdec34-8bf1-4c69-80b3-dd26a191e1d8" containerName="nova-manage" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.102188 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffdec34-8bf1-4c69-80b3-dd26a191e1d8" containerName="nova-manage" Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.102273 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cd2de3-6c98-4325-aaf5-134e9e44638a" containerName="nova-scheduler-scheduler" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.102284 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cd2de3-6c98-4325-aaf5-134e9e44638a" containerName="nova-scheduler-scheduler" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.102975 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cd2de3-6c98-4325-aaf5-134e9e44638a" containerName="nova-scheduler-scheduler" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.103012 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffdec34-8bf1-4c69-80b3-dd26a191e1d8" containerName="nova-manage" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.104946 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.110488 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.157142 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.210100 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.210158 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lm9g\" (UniqueName: \"kubernetes.io/projected/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-kube-api-access-5lm9g\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.210286 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-config-data\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.228906 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cd2de3-6c98-4325-aaf5-134e9e44638a" path="/var/lib/kubelet/pods/58cd2de3-6c98-4325-aaf5-134e9e44638a/volumes" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.316530 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-config-data\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.316762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lm9g\" (UniqueName: \"kubernetes.io/projected/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-kube-api-access-5lm9g\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.316817 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.326941 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-config-data\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.329324 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.343795 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lm9g\" (UniqueName: \"kubernetes.io/projected/1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6-kube-api-access-5lm9g\") pod \"nova-scheduler-0\" (UID: \"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6\") " 
pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.426052 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.438508 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.587250 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.622785 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.622872 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.623010 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.623036 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 
18:21:33.623075 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmt29\" (UniqueName: \"kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.623119 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle\") pod \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\" (UID: \"f5d094ea-816e-4314-9cf1-4e8f7984ad2c\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.624059 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs" (OuterVolumeSpecName: "logs") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.625892 4877 generic.go:334] "Generic (PLEG): container finished" podID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerID="f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc" exitCode=0 Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.626000 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerDied","Data":"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc"} Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.626045 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36ccf898-82b0-4b83-baf8-cffd8484e91b","Type":"ContainerDied","Data":"a0e7c4201fb6668389edfe7c4ee0a725b1babf97b2856cf85385f99acff7b7b9"} Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.626092 4877 scope.go:117] "RemoveContainer" containerID="f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.626284 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.630591 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29" (OuterVolumeSpecName: "kube-api-access-hmt29") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "kube-api-access-hmt29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.634835 4877 generic.go:334] "Generic (PLEG): container finished" podID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerID="792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308" exitCode=0 Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.634914 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerDied","Data":"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308"} Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.634955 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5d094ea-816e-4314-9cf1-4e8f7984ad2c","Type":"ContainerDied","Data":"0bfdd566a4f7a104ea954eece27548ba05225eec982f4ae735d09ece5ca9c735"} Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.635035 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.667005 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.673757 4877 scope.go:117] "RemoveContainer" containerID="e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.680620 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data" (OuterVolumeSpecName: "config-data") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.690987 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.702725 4877 scope.go:117] "RemoveContainer" containerID="f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc" Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.703472 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc\": container with ID starting with f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc not found: ID does not exist" containerID="f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.703535 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc"} err="failed to get container status \"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc\": rpc error: code = NotFound desc = could not find container \"f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc\": container with ID starting with f3728a8471f0e6790920341858f3599c11713e8a3102ad31edffcd730e5defbc not found: ID does not exist" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.703577 4877 scope.go:117] "RemoveContainer" containerID="e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59" Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.703907 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59\": container with ID starting with e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59 not found: ID does not exist" containerID="e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.703926 
4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59"} err="failed to get container status \"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59\": rpc error: code = NotFound desc = could not find container \"e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59\": container with ID starting with e7bb7b0d812fbb512b7303106f6a1b77ec1e214e882bf15bf3a0bc52af861b59 not found: ID does not exist" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.703940 4877 scope.go:117] "RemoveContainer" containerID="792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.711902 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5d094ea-816e-4314-9cf1-4e8f7984ad2c" (UID: "f5d094ea-816e-4314-9cf1-4e8f7984ad2c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.724705 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle\") pod \"36ccf898-82b0-4b83-baf8-cffd8484e91b\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.724818 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs\") pod \"36ccf898-82b0-4b83-baf8-cffd8484e91b\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.724858 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs\") pod \"36ccf898-82b0-4b83-baf8-cffd8484e91b\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.724908 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data\") pod \"36ccf898-82b0-4b83-baf8-cffd8484e91b\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725044 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sls7w\" (UniqueName: \"kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w\") pod \"36ccf898-82b0-4b83-baf8-cffd8484e91b\" (UID: \"36ccf898-82b0-4b83-baf8-cffd8484e91b\") " Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725488 4877 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725500 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725512 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmt29\" (UniqueName: \"kubernetes.io/projected/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-kube-api-access-hmt29\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725523 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725533 4877 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.725541 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d094ea-816e-4314-9cf1-4e8f7984ad2c-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.726275 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs" (OuterVolumeSpecName: "logs") pod "36ccf898-82b0-4b83-baf8-cffd8484e91b" (UID: "36ccf898-82b0-4b83-baf8-cffd8484e91b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.731648 4877 scope.go:117] "RemoveContainer" containerID="274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.738713 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w" (OuterVolumeSpecName: "kube-api-access-sls7w") pod "36ccf898-82b0-4b83-baf8-cffd8484e91b" (UID: "36ccf898-82b0-4b83-baf8-cffd8484e91b"). InnerVolumeSpecName "kube-api-access-sls7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.764655 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36ccf898-82b0-4b83-baf8-cffd8484e91b" (UID: "36ccf898-82b0-4b83-baf8-cffd8484e91b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.769198 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data" (OuterVolumeSpecName: "config-data") pod "36ccf898-82b0-4b83-baf8-cffd8484e91b" (UID: "36ccf898-82b0-4b83-baf8-cffd8484e91b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.769564 4877 scope.go:117] "RemoveContainer" containerID="792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308" Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.787061 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308\": container with ID starting with 792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308 not found: ID does not exist" containerID="792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.787132 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308"} err="failed to get container status \"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308\": rpc error: code = NotFound desc = could not find container \"792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308\": container with ID starting with 792c93494bd192477231d418ab90f6481f976f11ecd100ebd30d3aaf2bff2308 not found: ID does not exist" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.787182 4877 scope.go:117] "RemoveContainer" containerID="274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb" Dec 11 18:21:33 crc kubenswrapper[4877]: E1211 18:21:33.788279 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb\": container with ID starting with 274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb not found: ID does not exist" containerID="274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.788344 
4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb"} err="failed to get container status \"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb\": rpc error: code = NotFound desc = could not find container \"274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb\": container with ID starting with 274331a2befbe340a30cf7e2e7bdc2e88573a6fa6acb02b1cea0635fcd8b74fb not found: ID does not exist" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.815020 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "36ccf898-82b0-4b83-baf8-cffd8484e91b" (UID: "36ccf898-82b0-4b83-baf8-cffd8484e91b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.828501 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sls7w\" (UniqueName: \"kubernetes.io/projected/36ccf898-82b0-4b83-baf8-cffd8484e91b-kube-api-access-sls7w\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.828555 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.828575 4877 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.828590 4877 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/36ccf898-82b0-4b83-baf8-cffd8484e91b-logs\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.828604 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36ccf898-82b0-4b83-baf8-cffd8484e91b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.931677 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 18:21:33 crc kubenswrapper[4877]: I1211 18:21:33.973325 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.003355 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.019460 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: E1211 18:21:34.020041 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020070 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" Dec 11 18:21:34 crc kubenswrapper[4877]: E1211 18:21:34.020102 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-metadata" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020110 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-metadata" Dec 11 18:21:34 crc kubenswrapper[4877]: E1211 18:21:34.020126 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-log" Dec 11 18:21:34 crc 
kubenswrapper[4877]: I1211 18:21:34.020132 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-log" Dec 11 18:21:34 crc kubenswrapper[4877]: E1211 18:21:34.020149 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020156 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020355 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-log" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020397 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-log" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020421 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" containerName="nova-api-api" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.020437 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" containerName="nova-metadata-metadata" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.023529 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.029842 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.030075 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.033197 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.045640 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.064462 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.078837 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.082011 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.086121 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.089490 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.089933 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.099579 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.136050 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09959cf5-104d-4577-b6ae-d710a75c4aaf-logs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.136657 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvg5\" (UniqueName: \"kubernetes.io/projected/09959cf5-104d-4577-b6ae-d710a75c4aaf-kube-api-access-wxvg5\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.136688 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-config-data\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.137025 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.137140 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239157 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd82f9ba-3316-4498-b434-e0eea4518646-logs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239228 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsdq\" (UniqueName: \"kubernetes.io/projected/fd82f9ba-3316-4498-b434-e0eea4518646-kube-api-access-mgsdq\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239300 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239333 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-config-data\") pod 
\"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239403 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239445 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09959cf5-104d-4577-b6ae-d710a75c4aaf-logs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239488 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239530 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239575 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 
18:21:34.239626 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvg5\" (UniqueName: \"kubernetes.io/projected/09959cf5-104d-4577-b6ae-d710a75c4aaf-kube-api-access-wxvg5\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.239646 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-config-data\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.240235 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09959cf5-104d-4577-b6ae-d710a75c4aaf-logs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.246703 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.247036 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-config-data\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.248183 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09959cf5-104d-4577-b6ae-d710a75c4aaf-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.259208 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvg5\" (UniqueName: \"kubernetes.io/projected/09959cf5-104d-4577-b6ae-d710a75c4aaf-kube-api-access-wxvg5\") pod \"nova-metadata-0\" (UID: \"09959cf5-104d-4577-b6ae-d710a75c4aaf\") " pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.341526 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.341834 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd82f9ba-3316-4498-b434-e0eea4518646-logs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.341942 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgsdq\" (UniqueName: \"kubernetes.io/projected/fd82f9ba-3316-4498-b434-e0eea4518646-kube-api-access-mgsdq\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.342178 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-config-data\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.342320 4877 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.342408 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.342570 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd82f9ba-3316-4498-b434-e0eea4518646-logs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.347105 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.347565 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-config-data\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.348025 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 
18:21:34.348961 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd82f9ba-3316-4498-b434-e0eea4518646-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.363366 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.365779 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgsdq\" (UniqueName: \"kubernetes.io/projected/fd82f9ba-3316-4498-b434-e0eea4518646-kube-api-access-mgsdq\") pod \"nova-api-0\" (UID: \"fd82f9ba-3316-4498-b434-e0eea4518646\") " pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.413146 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.662848 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6","Type":"ContainerStarted","Data":"76e229b8a4ac3246af7ef9e133039d2fc71c591a3ce6f4adc71aa11a8c7e8104"} Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.663496 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6","Type":"ContainerStarted","Data":"1d2c005433067705672a6efb9e4cada5b53ebaa92729f93824ba9444251cb1fd"} Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.714513 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.714469188 podStartE2EDuration="1.714469188s" podCreationTimestamp="2025-12-11 18:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 18:21:34.687521456 +0000 UTC m=+1255.713765510" watchObservedRunningTime="2025-12-11 18:21:34.714469188 +0000 UTC m=+1255.740713242" Dec 11 18:21:34 crc kubenswrapper[4877]: I1211 18:21:34.925580 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 18:21:34 crc kubenswrapper[4877]: W1211 18:21:34.932716 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09959cf5_104d_4577_b6ae_d710a75c4aaf.slice/crio-56fa58c4f64da79f280743d7de829a1e5035fcaa899a157bdf2af74d4d283ec9 WatchSource:0}: Error finding container 56fa58c4f64da79f280743d7de829a1e5035fcaa899a157bdf2af74d4d283ec9: Status 404 returned error can't find the container with id 56fa58c4f64da79f280743d7de829a1e5035fcaa899a157bdf2af74d4d283ec9 Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.047199 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 18:21:35 crc kubenswrapper[4877]: W1211 18:21:35.055042 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd82f9ba_3316_4498_b434_e0eea4518646.slice/crio-8758d300f9a428a0b5f03b15268755f37d017f5589fbd61d02a1168f6226b9d8 WatchSource:0}: Error finding container 8758d300f9a428a0b5f03b15268755f37d017f5589fbd61d02a1168f6226b9d8: Status 404 returned error can't find the container with id 8758d300f9a428a0b5f03b15268755f37d017f5589fbd61d02a1168f6226b9d8 Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.230939 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ccf898-82b0-4b83-baf8-cffd8484e91b" path="/var/lib/kubelet/pods/36ccf898-82b0-4b83-baf8-cffd8484e91b/volumes" Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.232787 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d094ea-816e-4314-9cf1-4e8f7984ad2c" 
path="/var/lib/kubelet/pods/f5d094ea-816e-4314-9cf1-4e8f7984ad2c/volumes" Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.725679 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09959cf5-104d-4577-b6ae-d710a75c4aaf","Type":"ContainerStarted","Data":"b62102dff3426f2968e128b09f1ad0b5819fc6af40c28e84ffa0bbf1c09946ea"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.726067 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09959cf5-104d-4577-b6ae-d710a75c4aaf","Type":"ContainerStarted","Data":"53518372f1ac653b29b6baf97210f54e86c3b9447cf7f62f7451b47120a5c2e4"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.726080 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09959cf5-104d-4577-b6ae-d710a75c4aaf","Type":"ContainerStarted","Data":"56fa58c4f64da79f280743d7de829a1e5035fcaa899a157bdf2af74d4d283ec9"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.731198 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd82f9ba-3316-4498-b434-e0eea4518646","Type":"ContainerStarted","Data":"a2950061ca6e01ac9da16fa824c0e98a3b495bde9b55af0f683b6d9d65940e48"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.731227 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd82f9ba-3316-4498-b434-e0eea4518646","Type":"ContainerStarted","Data":"f8518841671cd71896974bac33b96460508caf9db602ce809db4b69b43691385"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.731237 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd82f9ba-3316-4498-b434-e0eea4518646","Type":"ContainerStarted","Data":"8758d300f9a428a0b5f03b15268755f37d017f5589fbd61d02a1168f6226b9d8"} Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.753506 4877 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7534821689999998 podStartE2EDuration="2.753482169s" podCreationTimestamp="2025-12-11 18:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:35.749221833 +0000 UTC m=+1256.775465887" watchObservedRunningTime="2025-12-11 18:21:35.753482169 +0000 UTC m=+1256.779726203" Dec 11 18:21:35 crc kubenswrapper[4877]: I1211 18:21:35.818902 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.818870683 podStartE2EDuration="2.818870683s" podCreationTimestamp="2025-12-11 18:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:21:35.796591948 +0000 UTC m=+1256.822836002" watchObservedRunningTime="2025-12-11 18:21:35.818870683 +0000 UTC m=+1256.845114727" Dec 11 18:21:38 crc kubenswrapper[4877]: I1211 18:21:38.439017 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 18:21:39 crc kubenswrapper[4877]: I1211 18:21:39.363888 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:21:39 crc kubenswrapper[4877]: I1211 18:21:39.363967 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 18:21:43 crc kubenswrapper[4877]: I1211 18:21:43.439197 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 18:21:43 crc kubenswrapper[4877]: I1211 18:21:43.472139 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 18:21:43 crc kubenswrapper[4877]: I1211 18:21:43.876778 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 
11 18:21:44 crc kubenswrapper[4877]: I1211 18:21:44.364291 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 18:21:44 crc kubenswrapper[4877]: I1211 18:21:44.364421 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 18:21:44 crc kubenswrapper[4877]: I1211 18:21:44.414434 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:21:44 crc kubenswrapper[4877]: I1211 18:21:44.414506 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 18:21:45 crc kubenswrapper[4877]: I1211 18:21:45.373775 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="09959cf5-104d-4577-b6ae-d710a75c4aaf" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:45 crc kubenswrapper[4877]: I1211 18:21:45.383736 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="09959cf5-104d-4577-b6ae-d710a75c4aaf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:45 crc kubenswrapper[4877]: I1211 18:21:45.430706 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd82f9ba-3316-4498-b434-e0eea4518646" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:45 crc kubenswrapper[4877]: I1211 18:21:45.430737 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd82f9ba-3316-4498-b434-e0eea4518646" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.372159 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.372943 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.387119 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.387634 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.421671 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.422304 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.434247 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.439625 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.963113 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 18:21:54 crc kubenswrapper[4877]: I1211 18:21:54.971006 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 18:22:03 crc kubenswrapper[4877]: I1211 18:22:03.502806 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:05 crc 
kubenswrapper[4877]: I1211 18:22:05.213414 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:08 crc kubenswrapper[4877]: I1211 18:22:08.185024 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="rabbitmq" containerID="cri-o://bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780" gracePeriod=604796 Dec 11 18:22:10 crc kubenswrapper[4877]: I1211 18:22:10.088300 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="rabbitmq" containerID="cri-o://6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296" gracePeriod=604796 Dec 11 18:22:12 crc kubenswrapper[4877]: I1211 18:22:12.236324 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Dec 11 18:22:12 crc kubenswrapper[4877]: I1211 18:22:12.585011 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Dec 11 18:22:14 crc kubenswrapper[4877]: I1211 18:22:14.967540 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.058551 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.058648 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.060397 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.061101 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.061987 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.164624 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.164827 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.164896 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.164950 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.165035 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.165079 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.165143 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.165245 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb62v\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v\") pod \"18fc032b-7957-4e94-929a-47c04d67b45f\" (UID: \"18fc032b-7957-4e94-929a-47c04d67b45f\") " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.169131 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf" (OuterVolumeSpecName: "server-conf") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.169596 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.171872 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.171912 4877 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.171927 4877 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.171943 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.172670 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info" (OuterVolumeSpecName: "pod-info") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.173051 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.176589 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.181118 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v" (OuterVolumeSpecName: "kube-api-access-vb62v") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "kube-api-access-vb62v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.190819 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.206635 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.208434 4877 generic.go:334] "Generic (PLEG): container finished" podID="18fc032b-7957-4e94-929a-47c04d67b45f" containerID="bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780" exitCode=0 Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.208703 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerDied","Data":"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780"} Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.208751 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"18fc032b-7957-4e94-929a-47c04d67b45f","Type":"ContainerDied","Data":"b94abeb534efe4a49d3bf21daed5bad18704205916df51ab6bbf24fad91654c6"} Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.208773 4877 scope.go:117] "RemoveContainer" containerID="bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.248462 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data" (OuterVolumeSpecName: "config-data") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.260728 4877 scope.go:117] "RemoveContainer" containerID="3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278522 4877 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/18fc032b-7957-4e94-929a-47c04d67b45f-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278562 4877 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/18fc032b-7957-4e94-929a-47c04d67b45f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278577 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18fc032b-7957-4e94-929a-47c04d67b45f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278587 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278622 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.278632 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb62v\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-kube-api-access-vb62v\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.285724 4877 scope.go:117] "RemoveContainer" 
containerID="bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780" Dec 11 18:22:15 crc kubenswrapper[4877]: E1211 18:22:15.286214 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780\": container with ID starting with bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780 not found: ID does not exist" containerID="bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.286245 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780"} err="failed to get container status \"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780\": rpc error: code = NotFound desc = could not find container \"bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780\": container with ID starting with bd307b31e54f3db61b981ba2623680bb2e45b8dd2aca2f78104e494f47ce1780 not found: ID does not exist" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.286287 4877 scope.go:117] "RemoveContainer" containerID="3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b" Dec 11 18:22:15 crc kubenswrapper[4877]: E1211 18:22:15.286796 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b\": container with ID starting with 3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b not found: ID does not exist" containerID="3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.286820 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b"} err="failed to get container status \"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b\": rpc error: code = NotFound desc = could not find container \"3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b\": container with ID starting with 3baa3250be1097a6dbd911daa5681626082cfbbb7bd7aa304ec88f6703b2465b not found: ID does not exist" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.300545 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "18fc032b-7957-4e94-929a-47c04d67b45f" (UID: "18fc032b-7957-4e94-929a-47c04d67b45f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.319172 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.381406 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.381454 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/18fc032b-7957-4e94-929a-47c04d67b45f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.604435 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.615718 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 
18:22:15.644980 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:15 crc kubenswrapper[4877]: E1211 18:22:15.645929 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="rabbitmq" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.645965 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="rabbitmq" Dec 11 18:22:15 crc kubenswrapper[4877]: E1211 18:22:15.645996 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="setup-container" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.646011 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="setup-container" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.646369 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" containerName="rabbitmq" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.648048 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.650163 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4p7d9" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.650713 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.650841 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.652127 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.652184 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.652368 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.653286 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.669102 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790569 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790651 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790674 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790725 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790786 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e630b01-bd78-44dc-bdc6-82a0bad7825c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790828 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4cpz\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-kube-api-access-r4cpz\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790864 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790886 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790919 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790954 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.790974 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e630b01-bd78-44dc-bdc6-82a0bad7825c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892604 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892709 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e630b01-bd78-44dc-bdc6-82a0bad7825c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892757 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4cpz\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-kube-api-access-r4cpz\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892794 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892814 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892862 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892880 
4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892894 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e630b01-bd78-44dc-bdc6-82a0bad7825c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892954 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.892984 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.893019 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.894414 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.894815 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.895135 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.896197 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.896335 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.898721 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: 
I1211 18:22:15.899213 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e630b01-bd78-44dc-bdc6-82a0bad7825c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.903025 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.914077 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e630b01-bd78-44dc-bdc6-82a0bad7825c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.914803 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e630b01-bd78-44dc-bdc6-82a0bad7825c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.916610 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4cpz\" (UniqueName: \"kubernetes.io/projected/4e630b01-bd78-44dc-bdc6-82a0bad7825c-kube-api-access-r4cpz\") pod \"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.933905 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"rabbitmq-server-0\" (UID: \"4e630b01-bd78-44dc-bdc6-82a0bad7825c\") " pod="openstack/rabbitmq-server-0" Dec 11 18:22:15 crc kubenswrapper[4877]: I1211 18:22:15.984016 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.438167 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.705999 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.814668 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.815239 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.815574 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.815852 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf\") pod 
\"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.815976 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816086 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816307 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816486 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816685 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816850 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.816981 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqf9z\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z\") pod \"d003258a-8e88-4f72-b82b-2367c81bd081\" (UID: \"d003258a-8e88-4f72-b82b-2367c81bd081\") " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.822515 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.823859 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.825691 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.831979 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info" (OuterVolumeSpecName: "pod-info") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.832024 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.832049 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z" (OuterVolumeSpecName: "kube-api-access-jqf9z") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "kube-api-access-jqf9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.832223 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.844749 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.857098 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data" (OuterVolumeSpecName: "config-data") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923865 4877 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d003258a-8e88-4f72-b82b-2367c81bd081-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923925 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923943 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923959 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 
18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923976 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqf9z\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-kube-api-access-jqf9z\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.923990 4877 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d003258a-8e88-4f72-b82b-2367c81bd081-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.924003 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.924015 4877 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.924027 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.956154 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf" (OuterVolumeSpecName: "server-conf") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:16 crc kubenswrapper[4877]: I1211 18:22:16.987612 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.027216 4877 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d003258a-8e88-4f72-b82b-2367c81bd081-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.027256 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.038763 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d003258a-8e88-4f72-b82b-2367c81bd081" (UID: "d003258a-8e88-4f72-b82b-2367c81bd081"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.131521 4877 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d003258a-8e88-4f72-b82b-2367c81bd081-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.236133 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fc032b-7957-4e94-929a-47c04d67b45f" path="/var/lib/kubelet/pods/18fc032b-7957-4e94-929a-47c04d67b45f/volumes" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.242219 4877 generic.go:334] "Generic (PLEG): container finished" podID="d003258a-8e88-4f72-b82b-2367c81bd081" containerID="6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296" exitCode=0 Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.242300 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerDied","Data":"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296"} Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.242336 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d003258a-8e88-4f72-b82b-2367c81bd081","Type":"ContainerDied","Data":"45484498c33bf23d3103ecc873c2ca14818178492ac49413a37e4230a46c4d1f"} Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.242356 4877 scope.go:117] "RemoveContainer" containerID="6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.242522 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.245674 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e630b01-bd78-44dc-bdc6-82a0bad7825c","Type":"ContainerStarted","Data":"44aeb3bd4e0ab262b803f5ef07f62742c7acc7731379606a6250834f37429b05"} Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.286566 4877 scope.go:117] "RemoveContainer" containerID="95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.294640 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.309572 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.319189 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:17 crc kubenswrapper[4877]: E1211 18:22:17.319629 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="rabbitmq" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.319646 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="rabbitmq" Dec 11 18:22:17 crc kubenswrapper[4877]: E1211 18:22:17.319687 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="setup-container" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.319694 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="setup-container" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.319925 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" containerName="rabbitmq" 
Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.320163 4877 scope.go:117] "RemoveContainer" containerID="6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.321345 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: E1211 18:22:17.321349 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296\": container with ID starting with 6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296 not found: ID does not exist" containerID="6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.321577 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296"} err="failed to get container status \"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296\": rpc error: code = NotFound desc = could not find container \"6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296\": container with ID starting with 6160982ecda03781a52a5c0cadbadd8f66258f7122dc55186e70d738a7b5c296 not found: ID does not exist" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.321697 4877 scope.go:117] "RemoveContainer" containerID="95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9" Dec 11 18:22:17 crc kubenswrapper[4877]: E1211 18:22:17.322552 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9\": container with ID starting with 95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9 not found: ID does not exist" 
containerID="95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.322584 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9"} err="failed to get container status \"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9\": rpc error: code = NotFound desc = could not find container \"95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9\": container with ID starting with 95c675d59aed12e33d0706bc4e929f1ce4eddd8ca6bfbbd7d02db9b1d954eee9 not found: ID does not exist" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.326361 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.326588 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k6bpv" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.326745 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.326911 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.327295 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.327599 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.328315 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.336516 4877 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.445983 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c488377-3b02-4126-b40d-6b8568352c77-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446614 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c488377-3b02-4126-b40d-6b8568352c77-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446650 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446698 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446819 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446838 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln5rc\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-kube-api-access-ln5rc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446874 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446907 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.446998 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.447038 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.447073 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549178 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c488377-3b02-4126-b40d-6b8568352c77-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549240 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c488377-3b02-4126-b40d-6b8568352c77-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549270 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549323 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 
18:22:17.549365 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549411 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln5rc\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-kube-api-access-ln5rc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549457 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549503 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549624 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549675 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.549736 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.550507 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.551439 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.553100 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.553756 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.554125 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.554365 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c488377-3b02-4126-b40d-6b8568352c77-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.567574 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c488377-3b02-4126-b40d-6b8568352c77-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.571785 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln5rc\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-kube-api-access-ln5rc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.576550 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c488377-3b02-4126-b40d-6b8568352c77-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.583577 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.594587 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c488377-3b02-4126-b40d-6b8568352c77-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.698606 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c488377-3b02-4126-b40d-6b8568352c77\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:17 crc kubenswrapper[4877]: I1211 18:22:17.947468 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.647588 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.775135 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.777052 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.779598 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.794469 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.881847 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmfr\" (UniqueName: \"kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.881929 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.881962 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.881984 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.882037 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.882060 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.882235 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983724 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983807 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmfr\" (UniqueName: \"kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: 
\"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983880 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983919 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983943 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.983993 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.984023 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.985259 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.985299 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.985310 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.985312 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.986239 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:18 crc kubenswrapper[4877]: I1211 18:22:18.987191 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.023099 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmfr\" (UniqueName: \"kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr\") pod \"dnsmasq-dns-79bd4cc8c9-g9mtg\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.139467 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.236064 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d003258a-8e88-4f72-b82b-2367c81bd081" path="/var/lib/kubelet/pods/d003258a-8e88-4f72-b82b-2367c81bd081/volumes" Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.305669 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c488377-3b02-4126-b40d-6b8568352c77","Type":"ContainerStarted","Data":"84a8da3292a037c858085370e7214e25ccdff65842cd4fccb64b2d05cf8e934e"} Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.308166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e630b01-bd78-44dc-bdc6-82a0bad7825c","Type":"ContainerStarted","Data":"a36905964dd46a8805e312320e5a4b07a74a976f6d0f30c698a425870d877689"} Dec 11 18:22:19 crc kubenswrapper[4877]: I1211 18:22:19.669770 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:19 crc kubenswrapper[4877]: W1211 18:22:19.675859 4877 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec26393e_8e13_4d0b_935e_11afa4f1193e.slice/crio-f7c201b1d364da47485233ef57468d7092a3481f9901f63260692266a435f87b WatchSource:0}: Error finding container f7c201b1d364da47485233ef57468d7092a3481f9901f63260692266a435f87b: Status 404 returned error can't find the container with id f7c201b1d364da47485233ef57468d7092a3481f9901f63260692266a435f87b Dec 11 18:22:20 crc kubenswrapper[4877]: I1211 18:22:20.319811 4877 generic.go:334] "Generic (PLEG): container finished" podID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerID="f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3" exitCode=0 Dec 11 18:22:20 crc kubenswrapper[4877]: I1211 18:22:20.320026 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" event={"ID":"ec26393e-8e13-4d0b-935e-11afa4f1193e","Type":"ContainerDied","Data":"f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3"} Dec 11 18:22:20 crc kubenswrapper[4877]: I1211 18:22:20.320752 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" event={"ID":"ec26393e-8e13-4d0b-935e-11afa4f1193e","Type":"ContainerStarted","Data":"f7c201b1d364da47485233ef57468d7092a3481f9901f63260692266a435f87b"} Dec 11 18:22:21 crc kubenswrapper[4877]: I1211 18:22:21.336021 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c488377-3b02-4126-b40d-6b8568352c77","Type":"ContainerStarted","Data":"f6c08bd7090ec0a801a16369a4312c3c4485becb19c0e14e7d3464c6d75118b1"} Dec 11 18:22:21 crc kubenswrapper[4877]: I1211 18:22:21.341851 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" event={"ID":"ec26393e-8e13-4d0b-935e-11afa4f1193e","Type":"ContainerStarted","Data":"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c"} Dec 11 18:22:21 
crc kubenswrapper[4877]: I1211 18:22:21.342182 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:21 crc kubenswrapper[4877]: I1211 18:22:21.407644 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" podStartSLOduration=3.407619629 podStartE2EDuration="3.407619629s" podCreationTimestamp="2025-12-11 18:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:22:21.401209145 +0000 UTC m=+1302.427453199" watchObservedRunningTime="2025-12-11 18:22:21.407619629 +0000 UTC m=+1302.433863673" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.141674 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.248255 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.248643 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="dnsmasq-dns" containerID="cri-o://647280d5c3baeb8c0e2d5671448b61ebfabc051701f821ab1089a1b6903a703b" gracePeriod=10 Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.424308 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-b6n82"] Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.430709 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.440134 4877 generic.go:334] "Generic (PLEG): container finished" podID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerID="647280d5c3baeb8c0e2d5671448b61ebfabc051701f821ab1089a1b6903a703b" exitCode=0 Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.440187 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" event={"ID":"e1069a2c-3591-4093-951b-5de43a45cdb6","Type":"ContainerDied","Data":"647280d5c3baeb8c0e2d5671448b61ebfabc051701f821ab1089a1b6903a703b"} Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.449048 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-b6n82"] Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.554966 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-config\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555041 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-svc\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555148 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 
11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555211 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555289 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqjq\" (UniqueName: \"kubernetes.io/projected/aa89614b-79d3-467a-8b6a-0e5e28606a1a-kube-api-access-2gqjq\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555354 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.555463 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657703 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: 
\"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657761 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657811 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-config\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657839 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-svc\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657915 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657956 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " 
pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.657981 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqjq\" (UniqueName: \"kubernetes.io/projected/aa89614b-79d3-467a-8b6a-0e5e28606a1a-kube-api-access-2gqjq\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.659017 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-config\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.659618 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.660157 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.660235 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-dns-svc\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc 
kubenswrapper[4877]: I1211 18:22:29.660232 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.660366 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa89614b-79d3-467a-8b6a-0e5e28606a1a-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.690158 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqjq\" (UniqueName: \"kubernetes.io/projected/aa89614b-79d3-467a-8b6a-0e5e28606a1a-kube-api-access-2gqjq\") pod \"dnsmasq-dns-55478c4467-b6n82\" (UID: \"aa89614b-79d3-467a-8b6a-0e5e28606a1a\") " pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.759266 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.764775 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863201 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44rh\" (UniqueName: \"kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863566 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863691 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863726 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863748 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.863847 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb\") pod \"e1069a2c-3591-4093-951b-5de43a45cdb6\" (UID: \"e1069a2c-3591-4093-951b-5de43a45cdb6\") " Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.872592 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh" (OuterVolumeSpecName: "kube-api-access-v44rh") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "kube-api-access-v44rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.926673 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.928256 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.928678 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config" (OuterVolumeSpecName: "config") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.932778 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.935001 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1069a2c-3591-4093-951b-5de43a45cdb6" (UID: "e1069a2c-3591-4093-951b-5de43a45cdb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.965938 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.965970 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44rh\" (UniqueName: \"kubernetes.io/projected/e1069a2c-3591-4093-951b-5de43a45cdb6-kube-api-access-v44rh\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.965983 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.965992 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-config\") on node \"crc\" DevicePath \"\"" Dec 11 
18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.966000 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:29 crc kubenswrapper[4877]: I1211 18:22:29.966008 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1069a2c-3591-4093-951b-5de43a45cdb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.248398 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-b6n82"] Dec 11 18:22:30 crc kubenswrapper[4877]: W1211 18:22:30.253753 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa89614b_79d3_467a_8b6a_0e5e28606a1a.slice/crio-bcd9a18184c1b3e15ea40683711a0f6f931f978bbadb2e994aa34391a1b074a4 WatchSource:0}: Error finding container bcd9a18184c1b3e15ea40683711a0f6f931f978bbadb2e994aa34391a1b074a4: Status 404 returned error can't find the container with id bcd9a18184c1b3e15ea40683711a0f6f931f978bbadb2e994aa34391a1b074a4 Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.453116 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-b6n82" event={"ID":"aa89614b-79d3-467a-8b6a-0e5e28606a1a","Type":"ContainerStarted","Data":"bcd9a18184c1b3e15ea40683711a0f6f931f978bbadb2e994aa34391a1b074a4"} Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.458702 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" event={"ID":"e1069a2c-3591-4093-951b-5de43a45cdb6","Type":"ContainerDied","Data":"baae782f21455d0467186e0b10657da68be91f7b8136823c791228be59e77c96"} Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.458767 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-dlfml" Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.458777 4877 scope.go:117] "RemoveContainer" containerID="647280d5c3baeb8c0e2d5671448b61ebfabc051701f821ab1089a1b6903a703b" Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.490626 4877 scope.go:117] "RemoveContainer" containerID="bd6c6fa5d2c000aa8df08c1d0961b00430b4136cfd8ce8dd8db3d35683049651" Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.531836 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:22:30 crc kubenswrapper[4877]: I1211 18:22:30.549988 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-dlfml"] Dec 11 18:22:31 crc kubenswrapper[4877]: I1211 18:22:31.229281 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" path="/var/lib/kubelet/pods/e1069a2c-3591-4093-951b-5de43a45cdb6/volumes" Dec 11 18:22:31 crc kubenswrapper[4877]: I1211 18:22:31.478370 4877 generic.go:334] "Generic (PLEG): container finished" podID="aa89614b-79d3-467a-8b6a-0e5e28606a1a" containerID="c6cacae7f18d50732e599ed64838905bb5582aabe13487d261edd18c6718458f" exitCode=0 Dec 11 18:22:31 crc kubenswrapper[4877]: I1211 18:22:31.478437 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-b6n82" event={"ID":"aa89614b-79d3-467a-8b6a-0e5e28606a1a","Type":"ContainerDied","Data":"c6cacae7f18d50732e599ed64838905bb5582aabe13487d261edd18c6718458f"} Dec 11 18:22:32 crc kubenswrapper[4877]: I1211 18:22:32.493467 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-b6n82" event={"ID":"aa89614b-79d3-467a-8b6a-0e5e28606a1a","Type":"ContainerStarted","Data":"997096bb66e5bc4fb71982ec759e18e92149c2f961803732d783d363905515f1"} Dec 11 18:22:32 crc kubenswrapper[4877]: I1211 18:22:32.494682 4877 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:32 crc kubenswrapper[4877]: I1211 18:22:32.529480 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-b6n82" podStartSLOduration=3.529453483 podStartE2EDuration="3.529453483s" podCreationTimestamp="2025-12-11 18:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:22:32.523295096 +0000 UTC m=+1313.549539160" watchObservedRunningTime="2025-12-11 18:22:32.529453483 +0000 UTC m=+1313.555697547" Dec 11 18:22:39 crc kubenswrapper[4877]: I1211 18:22:39.764825 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-b6n82" Dec 11 18:22:39 crc kubenswrapper[4877]: I1211 18:22:39.853649 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:39 crc kubenswrapper[4877]: I1211 18:22:39.854330 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="dnsmasq-dns" containerID="cri-o://54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c" gracePeriod=10 Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.351524 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.415980 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416122 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416238 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416260 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416317 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416450 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjmfr\" 
(UniqueName: \"kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.416496 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam\") pod \"ec26393e-8e13-4d0b-935e-11afa4f1193e\" (UID: \"ec26393e-8e13-4d0b-935e-11afa4f1193e\") " Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.439486 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr" (OuterVolumeSpecName: "kube-api-access-rjmfr") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "kube-api-access-rjmfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.486262 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.493766 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config" (OuterVolumeSpecName: "config") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.504836 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.514065 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.515367 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: "ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520554 4877 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520591 4877 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520609 4877 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520622 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520639 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjmfr\" (UniqueName: \"kubernetes.io/projected/ec26393e-8e13-4d0b-935e-11afa4f1193e-kube-api-access-rjmfr\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.520652 4877 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.533306 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec26393e-8e13-4d0b-935e-11afa4f1193e" (UID: 
"ec26393e-8e13-4d0b-935e-11afa4f1193e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.601477 4877 generic.go:334] "Generic (PLEG): container finished" podID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerID="54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c" exitCode=0 Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.601688 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.601681 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" event={"ID":"ec26393e-8e13-4d0b-935e-11afa4f1193e","Type":"ContainerDied","Data":"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c"} Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.601994 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-g9mtg" event={"ID":"ec26393e-8e13-4d0b-935e-11afa4f1193e","Type":"ContainerDied","Data":"f7c201b1d364da47485233ef57468d7092a3481f9901f63260692266a435f87b"} Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.602016 4877 scope.go:117] "RemoveContainer" containerID="54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.626706 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec26393e-8e13-4d0b-935e-11afa4f1193e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.647041 4877 scope.go:117] "RemoveContainer" containerID="f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.660750 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.675010 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-g9mtg"] Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.688335 4877 scope.go:117] "RemoveContainer" containerID="54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c" Dec 11 18:22:40 crc kubenswrapper[4877]: E1211 18:22:40.689011 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c\": container with ID starting with 54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c not found: ID does not exist" containerID="54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.689130 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c"} err="failed to get container status \"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c\": rpc error: code = NotFound desc = could not find container \"54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c\": container with ID starting with 54b72d82059fcd031f4d757d4b0d3e13d22f59d3c0aeb84aaff1bfd72290535c not found: ID does not exist" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.689241 4877 scope.go:117] "RemoveContainer" containerID="f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3" Dec 11 18:22:40 crc kubenswrapper[4877]: E1211 18:22:40.689971 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3\": container with ID starting with f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3 not found: ID 
does not exist" containerID="f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3" Dec 11 18:22:40 crc kubenswrapper[4877]: I1211 18:22:40.690029 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3"} err="failed to get container status \"f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3\": rpc error: code = NotFound desc = could not find container \"f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3\": container with ID starting with f608ed07e851430dee9dde12b6d3fa09a6ff7482cf1cf56ee54452cbf945d3e3 not found: ID does not exist" Dec 11 18:22:41 crc kubenswrapper[4877]: I1211 18:22:41.227215 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" path="/var/lib/kubelet/pods/ec26393e-8e13-4d0b-935e-11afa4f1193e/volumes" Dec 11 18:22:46 crc kubenswrapper[4877]: I1211 18:22:46.638412 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:22:46 crc kubenswrapper[4877]: I1211 18:22:46.639190 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:22:51 crc kubenswrapper[4877]: I1211 18:22:51.750155 4877 generic.go:334] "Generic (PLEG): container finished" podID="4e630b01-bd78-44dc-bdc6-82a0bad7825c" containerID="a36905964dd46a8805e312320e5a4b07a74a976f6d0f30c698a425870d877689" exitCode=0 Dec 11 18:22:51 crc kubenswrapper[4877]: I1211 
18:22:51.750266 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e630b01-bd78-44dc-bdc6-82a0bad7825c","Type":"ContainerDied","Data":"a36905964dd46a8805e312320e5a4b07a74a976f6d0f30c698a425870d877689"} Dec 11 18:22:52 crc kubenswrapper[4877]: I1211 18:22:52.764607 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e630b01-bd78-44dc-bdc6-82a0bad7825c","Type":"ContainerStarted","Data":"fe9dfe7f6d6eb0f423fbfa69d31755566c8eb20d319ef067612bbcba891a0493"} Dec 11 18:22:52 crc kubenswrapper[4877]: I1211 18:22:52.767059 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 18:22:52 crc kubenswrapper[4877]: I1211 18:22:52.798889 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.798863233 podStartE2EDuration="37.798863233s" podCreationTimestamp="2025-12-11 18:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:22:52.791554265 +0000 UTC m=+1333.817798329" watchObservedRunningTime="2025-12-11 18:22:52.798863233 +0000 UTC m=+1333.825107277" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.196040 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r"] Dec 11 18:22:53 crc kubenswrapper[4877]: E1211 18:22:53.196674 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.196696 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: E1211 18:22:53.196737 4877 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="init" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.196746 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="init" Dec 11 18:22:53 crc kubenswrapper[4877]: E1211 18:22:53.196787 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="init" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.196795 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="init" Dec 11 18:22:53 crc kubenswrapper[4877]: E1211 18:22:53.196812 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.196819 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.197026 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec26393e-8e13-4d0b-935e-11afa4f1193e" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.197051 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1069a2c-3591-4093-951b-5de43a45cdb6" containerName="dnsmasq-dns" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.203263 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.211523 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r"] Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.271868 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.273051 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.273286 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.278506 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.368606 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvqj\" (UniqueName: \"kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.368736 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: 
I1211 18:22:53.368816 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.368863 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.471338 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.471517 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvqj\" (UniqueName: \"kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.471609 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.471687 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.481648 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.482496 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.495524 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.496724 4877 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5dvqj\" (UniqueName: \"kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:53 crc kubenswrapper[4877]: I1211 18:22:53.588975 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:22:54 crc kubenswrapper[4877]: W1211 18:22:54.166286 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb547bdf_a5ee_410d_8a44_7bc5af05321d.slice/crio-64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5 WatchSource:0}: Error finding container 64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5: Status 404 returned error can't find the container with id 64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5 Dec 11 18:22:54 crc kubenswrapper[4877]: I1211 18:22:54.171152 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r"] Dec 11 18:22:54 crc kubenswrapper[4877]: I1211 18:22:54.804613 4877 generic.go:334] "Generic (PLEG): container finished" podID="9c488377-3b02-4126-b40d-6b8568352c77" containerID="f6c08bd7090ec0a801a16369a4312c3c4485becb19c0e14e7d3464c6d75118b1" exitCode=0 Dec 11 18:22:54 crc kubenswrapper[4877]: I1211 18:22:54.804744 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c488377-3b02-4126-b40d-6b8568352c77","Type":"ContainerDied","Data":"f6c08bd7090ec0a801a16369a4312c3c4485becb19c0e14e7d3464c6d75118b1"} Dec 11 18:22:54 crc kubenswrapper[4877]: I1211 18:22:54.806826 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" event={"ID":"db547bdf-a5ee-410d-8a44-7bc5af05321d","Type":"ContainerStarted","Data":"64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5"} Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.835825 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c488377-3b02-4126-b40d-6b8568352c77","Type":"ContainerStarted","Data":"348484cf69ee71ff7075299aaf99eb4d397c13eb9b2ee9c2c61cf09a84f49af0"} Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.837539 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.846485 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" exitCode=1 Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.846528 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab"} Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.846560 4877 scope.go:117] "RemoveContainer" containerID="2994614fe89fdd0cbb9ef815b7ea11d886f0125684a6e917c03f027f429e3d39" Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.846987 4877 scope.go:117] "RemoveContainer" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" Dec 11 18:22:55 crc kubenswrapper[4877]: E1211 18:22:55.847188 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:22:55 crc kubenswrapper[4877]: I1211 18:22:55.869995 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.869975548 podStartE2EDuration="38.869975548s" podCreationTimestamp="2025-12-11 18:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 18:22:55.865881427 +0000 UTC m=+1336.892125471" watchObservedRunningTime="2025-12-11 18:22:55.869975548 +0000 UTC m=+1336.896219602" Dec 11 18:23:01 crc kubenswrapper[4877]: I1211 18:23:01.137517 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:23:01 crc kubenswrapper[4877]: I1211 18:23:01.138311 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:23:01 crc kubenswrapper[4877]: I1211 18:23:01.139696 4877 scope.go:117] "RemoveContainer" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" Dec 11 18:23:01 crc kubenswrapper[4877]: E1211 18:23:01.140237 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:23:04 crc kubenswrapper[4877]: I1211 18:23:04.973489 4877 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" event={"ID":"db547bdf-a5ee-410d-8a44-7bc5af05321d","Type":"ContainerStarted","Data":"03de5b56d1a2106e0e32db931f72a8fde54a19b6e14fc6fb4e354897dcc510e9"} Dec 11 18:23:05 crc kubenswrapper[4877]: I1211 18:23:05.004667 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" podStartSLOduration=2.203253385 podStartE2EDuration="12.004638663s" podCreationTimestamp="2025-12-11 18:22:53 +0000 UTC" firstStartedPulling="2025-12-11 18:22:54.16887157 +0000 UTC m=+1335.195115614" lastFinishedPulling="2025-12-11 18:23:03.970256848 +0000 UTC m=+1344.996500892" observedRunningTime="2025-12-11 18:23:04.994770455 +0000 UTC m=+1346.021014499" watchObservedRunningTime="2025-12-11 18:23:05.004638663 +0000 UTC m=+1346.030882717" Dec 11 18:23:05 crc kubenswrapper[4877]: I1211 18:23:05.988684 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 18:23:07 crc kubenswrapper[4877]: I1211 18:23:07.951925 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 18:23:14 crc kubenswrapper[4877]: I1211 18:23:14.216183 4877 scope.go:117] "RemoveContainer" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" Dec 11 18:23:14 crc kubenswrapper[4877]: E1211 18:23:14.218419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:23:16 crc kubenswrapper[4877]: I1211 18:23:16.099451 4877 generic.go:334] "Generic (PLEG): 
container finished" podID="db547bdf-a5ee-410d-8a44-7bc5af05321d" containerID="03de5b56d1a2106e0e32db931f72a8fde54a19b6e14fc6fb4e354897dcc510e9" exitCode=0 Dec 11 18:23:16 crc kubenswrapper[4877]: I1211 18:23:16.099585 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" event={"ID":"db547bdf-a5ee-410d-8a44-7bc5af05321d","Type":"ContainerDied","Data":"03de5b56d1a2106e0e32db931f72a8fde54a19b6e14fc6fb4e354897dcc510e9"} Dec 11 18:23:16 crc kubenswrapper[4877]: I1211 18:23:16.638319 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:23:16 crc kubenswrapper[4877]: I1211 18:23:16.638770 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.528894 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.566710 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle\") pod \"db547bdf-a5ee-410d-8a44-7bc5af05321d\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.566988 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dvqj\" (UniqueName: \"kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj\") pod \"db547bdf-a5ee-410d-8a44-7bc5af05321d\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.567311 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory\") pod \"db547bdf-a5ee-410d-8a44-7bc5af05321d\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.567424 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key\") pod \"db547bdf-a5ee-410d-8a44-7bc5af05321d\" (UID: \"db547bdf-a5ee-410d-8a44-7bc5af05321d\") " Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.574751 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "db547bdf-a5ee-410d-8a44-7bc5af05321d" (UID: "db547bdf-a5ee-410d-8a44-7bc5af05321d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.576640 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj" (OuterVolumeSpecName: "kube-api-access-5dvqj") pod "db547bdf-a5ee-410d-8a44-7bc5af05321d" (UID: "db547bdf-a5ee-410d-8a44-7bc5af05321d"). InnerVolumeSpecName "kube-api-access-5dvqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.601720 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory" (OuterVolumeSpecName: "inventory") pod "db547bdf-a5ee-410d-8a44-7bc5af05321d" (UID: "db547bdf-a5ee-410d-8a44-7bc5af05321d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.613637 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db547bdf-a5ee-410d-8a44-7bc5af05321d" (UID: "db547bdf-a5ee-410d-8a44-7bc5af05321d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.671075 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dvqj\" (UniqueName: \"kubernetes.io/projected/db547bdf-a5ee-410d-8a44-7bc5af05321d-kube-api-access-5dvqj\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.671125 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.671138 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:17 crc kubenswrapper[4877]: I1211 18:23:17.671149 4877 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db547bdf-a5ee-410d-8a44-7bc5af05321d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.122361 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" event={"ID":"db547bdf-a5ee-410d-8a44-7bc5af05321d","Type":"ContainerDied","Data":"64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5"} Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.122535 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64aba14d60a92f009bab1550755daafa07ea8125228535459dd1d84fb02ebdb5" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.122486 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.224683 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w"] Dec 11 18:23:18 crc kubenswrapper[4877]: E1211 18:23:18.225239 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db547bdf-a5ee-410d-8a44-7bc5af05321d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.225262 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="db547bdf-a5ee-410d-8a44-7bc5af05321d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.225540 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="db547bdf-a5ee-410d-8a44-7bc5af05321d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.226447 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.229256 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.229712 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.230551 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.230579 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.278484 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w"] Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.284371 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.284567 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf8w\" (UniqueName: \"kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.284670 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.385392 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.385507 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frf8w\" (UniqueName: \"kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.385634 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.389903 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.389933 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.409212 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frf8w\" (UniqueName: \"kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-wnw2w\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:18 crc kubenswrapper[4877]: I1211 18:23:18.587553 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:19 crc kubenswrapper[4877]: I1211 18:23:19.150478 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w"] Dec 11 18:23:20 crc kubenswrapper[4877]: I1211 18:23:20.164583 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" event={"ID":"153cc7ad-8854-4f42-80cd-2fcdb2f453cd","Type":"ContainerStarted","Data":"831a9edfc845a89339f5f7028c480daa7b20d1ecfb4a7553b3fb390e355e904d"} Dec 11 18:23:21 crc kubenswrapper[4877]: I1211 18:23:21.178497 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" event={"ID":"153cc7ad-8854-4f42-80cd-2fcdb2f453cd","Type":"ContainerStarted","Data":"1b09c458ffbf0c0b9d7a5e64601bf2fe4aa166598267d9d3743ef3d9ac03b572"} Dec 11 18:23:21 crc kubenswrapper[4877]: I1211 18:23:21.219588 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" podStartSLOduration=2.338227496 podStartE2EDuration="3.219563424s" podCreationTimestamp="2025-12-11 18:23:18 +0000 UTC" firstStartedPulling="2025-12-11 18:23:19.157978489 +0000 UTC m=+1360.184222543" lastFinishedPulling="2025-12-11 18:23:20.039314427 +0000 UTC m=+1361.065558471" observedRunningTime="2025-12-11 18:23:21.207867427 +0000 UTC m=+1362.234111511" watchObservedRunningTime="2025-12-11 18:23:21.219563424 +0000 UTC m=+1362.245807468" Dec 11 18:23:23 crc kubenswrapper[4877]: I1211 18:23:23.207845 4877 generic.go:334] "Generic (PLEG): container finished" podID="153cc7ad-8854-4f42-80cd-2fcdb2f453cd" containerID="1b09c458ffbf0c0b9d7a5e64601bf2fe4aa166598267d9d3743ef3d9ac03b572" exitCode=0 Dec 11 18:23:23 crc kubenswrapper[4877]: I1211 18:23:23.208037 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" event={"ID":"153cc7ad-8854-4f42-80cd-2fcdb2f453cd","Type":"ContainerDied","Data":"1b09c458ffbf0c0b9d7a5e64601bf2fe4aa166598267d9d3743ef3d9ac03b572"} Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.695213 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.833172 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frf8w\" (UniqueName: \"kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w\") pod \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.833549 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory\") pod \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.833593 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key\") pod \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\" (UID: \"153cc7ad-8854-4f42-80cd-2fcdb2f453cd\") " Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.840908 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w" (OuterVolumeSpecName: "kube-api-access-frf8w") pod "153cc7ad-8854-4f42-80cd-2fcdb2f453cd" (UID: "153cc7ad-8854-4f42-80cd-2fcdb2f453cd"). InnerVolumeSpecName "kube-api-access-frf8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.867865 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory" (OuterVolumeSpecName: "inventory") pod "153cc7ad-8854-4f42-80cd-2fcdb2f453cd" (UID: "153cc7ad-8854-4f42-80cd-2fcdb2f453cd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.878475 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "153cc7ad-8854-4f42-80cd-2fcdb2f453cd" (UID: "153cc7ad-8854-4f42-80cd-2fcdb2f453cd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.936156 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.936198 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:24 crc kubenswrapper[4877]: I1211 18:23:24.936208 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frf8w\" (UniqueName: \"kubernetes.io/projected/153cc7ad-8854-4f42-80cd-2fcdb2f453cd-kube-api-access-frf8w\") on node \"crc\" DevicePath \"\"" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.233528 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" 
event={"ID":"153cc7ad-8854-4f42-80cd-2fcdb2f453cd","Type":"ContainerDied","Data":"831a9edfc845a89339f5f7028c480daa7b20d1ecfb4a7553b3fb390e355e904d"} Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.234092 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="831a9edfc845a89339f5f7028c480daa7b20d1ecfb4a7553b3fb390e355e904d" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.233647 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-wnw2w" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.322794 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k"] Dec 11 18:23:25 crc kubenswrapper[4877]: E1211 18:23:25.323533 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153cc7ad-8854-4f42-80cd-2fcdb2f453cd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.323565 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="153cc7ad-8854-4f42-80cd-2fcdb2f453cd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.323873 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="153cc7ad-8854-4f42-80cd-2fcdb2f453cd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.324980 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.331010 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.331249 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.331536 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.331819 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.341131 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k"] Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.449242 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.449309 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.449423 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.449459 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xz67\" (UniqueName: \"kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.551920 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.551998 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xz67\" (UniqueName: \"kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.552150 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.552191 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.557702 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.558302 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.558956 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.570722 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xz67\" (UniqueName: \"kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:25 crc kubenswrapper[4877]: I1211 18:23:25.651905 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:23:26 crc kubenswrapper[4877]: I1211 18:23:26.416183 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k"] Dec 11 18:23:27 crc kubenswrapper[4877]: I1211 18:23:27.216274 4877 scope.go:117] "RemoveContainer" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" Dec 11 18:23:27 crc kubenswrapper[4877]: I1211 18:23:27.258150 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" event={"ID":"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a","Type":"ContainerStarted","Data":"eb8f505619b5d390b7366c50e195e8c3fdbc8b27084d764b4e6328acd6585b85"} Dec 11 18:23:28 crc kubenswrapper[4877]: I1211 18:23:28.273551 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" event={"ID":"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a","Type":"ContainerStarted","Data":"b9cb93a46f74ec4853c17dbe392485aae77089c08bff2f8e25170dd019b0cfff"} Dec 11 18:23:28 crc kubenswrapper[4877]: I1211 18:23:28.282979 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f"} Dec 11 18:23:28 crc 
kubenswrapper[4877]: I1211 18:23:28.283287 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:23:28 crc kubenswrapper[4877]: I1211 18:23:28.325913 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" podStartSLOduration=2.648690678 podStartE2EDuration="3.325888225s" podCreationTimestamp="2025-12-11 18:23:25 +0000 UTC" firstStartedPulling="2025-12-11 18:23:26.421241582 +0000 UTC m=+1367.447485626" lastFinishedPulling="2025-12-11 18:23:27.098439129 +0000 UTC m=+1368.124683173" observedRunningTime="2025-12-11 18:23:28.306686394 +0000 UTC m=+1369.332930438" watchObservedRunningTime="2025-12-11 18:23:28.325888225 +0000 UTC m=+1369.352132269" Dec 11 18:23:41 crc kubenswrapper[4877]: I1211 18:23:41.144525 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:23:43 crc kubenswrapper[4877]: I1211 18:23:43.346841 4877 scope.go:117] "RemoveContainer" containerID="3dd06d10b774b28213245a9825e2ac4f371d11e2ec18cc1e60b76c6ff4034e6e" Dec 11 18:23:43 crc kubenswrapper[4877]: I1211 18:23:43.372646 4877 scope.go:117] "RemoveContainer" containerID="9046ad3100a2748ca7171d9ce666df17188450a9dd3d949bb13455728e37ed99" Dec 11 18:23:43 crc kubenswrapper[4877]: I1211 18:23:43.424563 4877 scope.go:117] "RemoveContainer" containerID="6921e39eca0aacb0cecf266043df38b33ad27236c65708211e8368e36d62f77f" Dec 11 18:23:43 crc kubenswrapper[4877]: I1211 18:23:43.467451 4877 scope.go:117] "RemoveContainer" containerID="3dfb7c565c1c40ab9f1f6b5e3960e84fe74275bf55171dc8ee1e6ee72aa3c019" Dec 11 18:23:43 crc kubenswrapper[4877]: I1211 18:23:43.499993 4877 scope.go:117] "RemoveContainer" containerID="b770f3a72015a98d6a82bed7cef9f486e977273d5998ace9ac26576c8738edf3" Dec 11 18:23:46 crc kubenswrapper[4877]: I1211 
18:23:46.638148 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:23:46 crc kubenswrapper[4877]: I1211 18:23:46.638662 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:23:46 crc kubenswrapper[4877]: I1211 18:23:46.638741 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:23:46 crc kubenswrapper[4877]: I1211 18:23:46.640120 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:23:46 crc kubenswrapper[4877]: I1211 18:23:46.640246 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491" gracePeriod=600 Dec 11 18:23:47 crc kubenswrapper[4877]: I1211 18:23:47.523928 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491" exitCode=0 Dec 11 
18:23:47 crc kubenswrapper[4877]: I1211 18:23:47.523985 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491"} Dec 11 18:23:47 crc kubenswrapper[4877]: I1211 18:23:47.524884 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543"} Dec 11 18:23:47 crc kubenswrapper[4877]: I1211 18:23:47.524938 4877 scope.go:117] "RemoveContainer" containerID="77165bed566223956d79451be46ee9e0e54607425e94b061474a87842819a95a" Dec 11 18:26:00 crc kubenswrapper[4877]: I1211 18:26:00.115759 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" exitCode=1 Dec 11 18:26:00 crc kubenswrapper[4877]: I1211 18:26:00.115855 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f"} Dec 11 18:26:00 crc kubenswrapper[4877]: I1211 18:26:00.116842 4877 scope.go:117] "RemoveContainer" containerID="5828cc3b3843c51140b3bcb3297dd0f7ed118f8aebe644af556ed4341651d5ab" Dec 11 18:26:00 crc kubenswrapper[4877]: I1211 18:26:00.117883 4877 scope.go:117] "RemoveContainer" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:26:00 crc kubenswrapper[4877]: E1211 18:26:00.118290 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:26:01 crc kubenswrapper[4877]: I1211 18:26:01.137752 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:26:01 crc kubenswrapper[4877]: I1211 18:26:01.138327 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:26:01 crc kubenswrapper[4877]: I1211 18:26:01.138956 4877 scope.go:117] "RemoveContainer" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:26:01 crc kubenswrapper[4877]: E1211 18:26:01.139413 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:26:16 crc kubenswrapper[4877]: I1211 18:26:16.215765 4877 scope.go:117] "RemoveContainer" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:26:16 crc kubenswrapper[4877]: E1211 18:26:16.216661 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" 
podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:26:16 crc kubenswrapper[4877]: I1211 18:26:16.637570 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:26:16 crc kubenswrapper[4877]: I1211 18:26:16.637997 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:26:30 crc kubenswrapper[4877]: I1211 18:26:30.216058 4877 scope.go:117] "RemoveContainer" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:26:30 crc kubenswrapper[4877]: E1211 18:26:30.217520 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:26:42 crc kubenswrapper[4877]: I1211 18:26:42.216336 4877 scope.go:117] "RemoveContainer" containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:26:42 crc kubenswrapper[4877]: I1211 18:26:42.694063 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7"} Dec 11 18:26:42 crc 
kubenswrapper[4877]: I1211 18:26:42.694481 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:26:43 crc kubenswrapper[4877]: I1211 18:26:43.667674 4877 scope.go:117] "RemoveContainer" containerID="ad27f017570361709f76d4bbeece43d30b58f7e7909e17045a13da10bdccd242" Dec 11 18:26:43 crc kubenswrapper[4877]: I1211 18:26:43.696012 4877 scope.go:117] "RemoveContainer" containerID="3fdb7f3fe03d2cf69ec426d4a1747288e5b5ba72cee225d642a11a59bdf4df1c" Dec 11 18:26:43 crc kubenswrapper[4877]: I1211 18:26:43.724168 4877 scope.go:117] "RemoveContainer" containerID="99355a74e65d8bba6d8b2f9ee0906bd3e9631d80923edb40c639de35005aba25" Dec 11 18:26:44 crc kubenswrapper[4877]: I1211 18:26:44.725583 4877 generic.go:334] "Generic (PLEG): container finished" podID="3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" containerID="b9cb93a46f74ec4853c17dbe392485aae77089c08bff2f8e25170dd019b0cfff" exitCode=0 Dec 11 18:26:44 crc kubenswrapper[4877]: I1211 18:26:44.725641 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" event={"ID":"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a","Type":"ContainerDied","Data":"b9cb93a46f74ec4853c17dbe392485aae77089c08bff2f8e25170dd019b0cfff"} Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.221027 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.388082 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key\") pod \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.388198 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory\") pod \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.388427 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xz67\" (UniqueName: \"kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67\") pod \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.388479 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle\") pod \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\" (UID: \"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a\") " Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.397916 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67" (OuterVolumeSpecName: "kube-api-access-6xz67") pod "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" (UID: "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a"). InnerVolumeSpecName "kube-api-access-6xz67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.398677 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" (UID: "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.420219 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory" (OuterVolumeSpecName: "inventory") pod "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" (UID: "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.455180 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" (UID: "3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.492538 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.492596 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.492615 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xz67\" (UniqueName: \"kubernetes.io/projected/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-kube-api-access-6xz67\") on node \"crc\" DevicePath \"\"" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.492680 4877 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.638520 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.639104 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.760483 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" event={"ID":"3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a","Type":"ContainerDied","Data":"eb8f505619b5d390b7366c50e195e8c3fdbc8b27084d764b4e6328acd6585b85"} Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.760536 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb8f505619b5d390b7366c50e195e8c3fdbc8b27084d764b4e6328acd6585b85" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.760748 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.869606 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp"] Dec 11 18:26:46 crc kubenswrapper[4877]: E1211 18:26:46.891763 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.891819 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.899552 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.910184 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp"] Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.910513 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.913473 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.919100 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.919422 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:26:46 crc kubenswrapper[4877]: I1211 18:26:46.919676 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.017755 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.018206 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.018266 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkbhr\" (UniqueName: 
\"kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.120278 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.120397 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.120452 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkbhr\" (UniqueName: \"kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.128965 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.133019 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.141543 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkbhr\" (UniqueName: \"kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.239449 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.840402 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp"] Dec 11 18:26:47 crc kubenswrapper[4877]: I1211 18:26:47.849475 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:26:48 crc kubenswrapper[4877]: I1211 18:26:48.786536 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" event={"ID":"2450d804-2d74-4d93-8a06-95190b0c8e94","Type":"ContainerStarted","Data":"cbd9cc8f54028343d92a6cf7e9632b622403a9c5ddfdb03c2c86c3cf72daf75a"} Dec 11 18:26:48 crc kubenswrapper[4877]: I1211 18:26:48.787143 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" event={"ID":"2450d804-2d74-4d93-8a06-95190b0c8e94","Type":"ContainerStarted","Data":"c8133219047e3f100c6bd11503a5d57985c4fb298a07e5d9aeca7042bb46d0ea"} Dec 11 18:26:48 crc kubenswrapper[4877]: I1211 18:26:48.822236 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" podStartSLOduration=2.291521038 podStartE2EDuration="2.822214139s" podCreationTimestamp="2025-12-11 18:26:46 +0000 UTC" firstStartedPulling="2025-12-11 18:26:47.848883569 +0000 UTC m=+1568.875127613" lastFinishedPulling="2025-12-11 18:26:48.37957662 +0000 UTC m=+1569.405820714" observedRunningTime="2025-12-11 18:26:48.807864989 +0000 UTC m=+1569.834109033" watchObservedRunningTime="2025-12-11 18:26:48.822214139 +0000 UTC m=+1569.848458203" Dec 11 18:26:51 crc kubenswrapper[4877]: I1211 18:26:51.145845 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.053343 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-zhlnf"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.072475 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-zhlnf"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.085089 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-88d2-account-create-update-npshn"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.096969 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4297-account-create-update-kkkdm"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.106624 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6whzf"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.117222 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-88d2-account-create-update-npshn"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.126244 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3b79-account-create-update-7278w"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.135335 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6whzf"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.144187 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4297-account-create-update-kkkdm"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.154610 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3b79-account-create-update-7278w"] Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.229768 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00780885-fa23-4d5f-a7f2-b5fb9bb5add9" 
path="/var/lib/kubelet/pods/00780885-fa23-4d5f-a7f2-b5fb9bb5add9/volumes" Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.230602 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd1dca0-c7b0-4a84-893a-73db70d80919" path="/var/lib/kubelet/pods/9bd1dca0-c7b0-4a84-893a-73db70d80919/volumes" Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.231326 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31530ff-c24d-4d0b-a197-4a1d1b638990" path="/var/lib/kubelet/pods/d31530ff-c24d-4d0b-a197-4a1d1b638990/volumes" Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.232103 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da28a6a2-f437-44c5-9fb3-60cdbe0523d9" path="/var/lib/kubelet/pods/da28a6a2-f437-44c5-9fb3-60cdbe0523d9/volumes" Dec 11 18:26:59 crc kubenswrapper[4877]: I1211 18:26:59.233846 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3617227-9338-4854-b9bd-dc2416275371" path="/var/lib/kubelet/pods/e3617227-9338-4854-b9bd-dc2416275371/volumes" Dec 11 18:27:00 crc kubenswrapper[4877]: I1211 18:27:00.031625 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4l7l6"] Dec 11 18:27:00 crc kubenswrapper[4877]: I1211 18:27:00.043254 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4l7l6"] Dec 11 18:27:01 crc kubenswrapper[4877]: I1211 18:27:01.266555 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63d99b5-f6da-4796-a4a6-5b299fc7b55d" path="/var/lib/kubelet/pods/e63d99b5-f6da-4796-a4a6-5b299fc7b55d/volumes" Dec 11 18:27:16 crc kubenswrapper[4877]: I1211 18:27:16.638254 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Dec 11 18:27:16 crc kubenswrapper[4877]: I1211 18:27:16.640493 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:27:16 crc kubenswrapper[4877]: I1211 18:27:16.640746 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:27:16 crc kubenswrapper[4877]: I1211 18:27:16.642004 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:27:16 crc kubenswrapper[4877]: I1211 18:27:16.642500 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" gracePeriod=600 Dec 11 18:27:16 crc kubenswrapper[4877]: E1211 18:27:16.768815 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:27:17 crc kubenswrapper[4877]: I1211 18:27:17.101860 4877 
generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" exitCode=0 Dec 11 18:27:17 crc kubenswrapper[4877]: I1211 18:27:17.101918 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543"} Dec 11 18:27:17 crc kubenswrapper[4877]: I1211 18:27:17.101964 4877 scope.go:117] "RemoveContainer" containerID="98fe33bbeaf8100bfd51bbde45283bda4602cbed36f4b77a0f7eb531f2dd1491" Dec 11 18:27:17 crc kubenswrapper[4877]: I1211 18:27:17.102743 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:27:17 crc kubenswrapper[4877]: E1211 18:27:17.103188 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.076327 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qj6wx"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.092500 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-895b-account-create-update-rbv8b"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.104967 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qj6wx"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.115542 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-c23a-account-create-update-mrnds"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.123828 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6dd3-account-create-update-xwr7s"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.131801 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-tmjmw"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.142407 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-tmjmw"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.156445 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-895b-account-create-update-rbv8b"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.166772 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c23a-account-create-update-mrnds"] Dec 11 18:27:22 crc kubenswrapper[4877]: I1211 18:27:22.177150 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6dd3-account-create-update-xwr7s"] Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.042887 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rlb72"] Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.056966 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rlb72"] Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.234936 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053e2796-2bef-48ed-a1c2-47917558ad1a" path="/var/lib/kubelet/pods/053e2796-2bef-48ed-a1c2-47917558ad1a/volumes" Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.236247 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6f25ca-ff23-47cf-99f9-eb8355c546ec" path="/var/lib/kubelet/pods/0c6f25ca-ff23-47cf-99f9-eb8355c546ec/volumes" Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.240908 4877 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="83845faf-f287-4962-afff-966bfac050eb" path="/var/lib/kubelet/pods/83845faf-f287-4962-afff-966bfac050eb/volumes" Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.241778 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c691a5-8b64-4aee-8833-3453f25422ce" path="/var/lib/kubelet/pods/89c691a5-8b64-4aee-8833-3453f25422ce/volumes" Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.243276 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa10530-60c2-46d2-8a52-7422281745bf" path="/var/lib/kubelet/pods/8aa10530-60c2-46d2-8a52-7422281745bf/volumes" Dec 11 18:27:23 crc kubenswrapper[4877]: I1211 18:27:23.244934 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7" path="/var/lib/kubelet/pods/eadd4137-d171-45fc-b3e0-e0a1ebb0e5a7/volumes" Dec 11 18:27:30 crc kubenswrapper[4877]: I1211 18:27:30.216249 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:27:30 crc kubenswrapper[4877]: E1211 18:27:30.217085 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:27:33 crc kubenswrapper[4877]: I1211 18:27:33.047324 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6m6wj"] Dec 11 18:27:33 crc kubenswrapper[4877]: I1211 18:27:33.092037 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6m6wj"] Dec 11 18:27:33 crc kubenswrapper[4877]: I1211 18:27:33.228613 4877 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ea097e48-d917-409a-befa-14d0ba6dc67b" path="/var/lib/kubelet/pods/ea097e48-d917-409a-befa-14d0ba6dc67b/volumes" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.066762 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7hhdv"] Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.079279 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7hhdv"] Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.244325 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b58d9e9-69e6-42e8-86eb-538ac26c6340" path="/var/lib/kubelet/pods/9b58d9e9-69e6-42e8-86eb-538ac26c6340/volumes" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.246028 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.249237 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.256946 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.334213 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxp9\" (UniqueName: \"kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.334331 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities\") pod \"redhat-marketplace-c789n\" (UID: 
\"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.334377 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.436263 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fxp9\" (UniqueName: \"kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.436377 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.436434 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.436986 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content\") pod \"redhat-marketplace-c789n\" (UID: 
\"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.437066 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.461933 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fxp9\" (UniqueName: \"kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9\") pod \"redhat-marketplace-c789n\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:35 crc kubenswrapper[4877]: I1211 18:27:35.594301 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:36 crc kubenswrapper[4877]: I1211 18:27:36.088259 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:36 crc kubenswrapper[4877]: I1211 18:27:36.322243 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerStarted","Data":"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d"} Dec 11 18:27:36 crc kubenswrapper[4877]: I1211 18:27:36.322311 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerStarted","Data":"fe3906e3d407adb26d1a23063c408851cef80140101a9835c7019502e2242dd8"} Dec 11 18:27:37 crc kubenswrapper[4877]: I1211 18:27:37.335731 4877 generic.go:334] "Generic (PLEG): container 
finished" podID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerID="b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d" exitCode=0 Dec 11 18:27:37 crc kubenswrapper[4877]: I1211 18:27:37.335834 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerDied","Data":"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d"} Dec 11 18:27:37 crc kubenswrapper[4877]: I1211 18:27:37.336209 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerStarted","Data":"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2"} Dec 11 18:27:38 crc kubenswrapper[4877]: I1211 18:27:38.349194 4877 generic.go:334] "Generic (PLEG): container finished" podID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerID="90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2" exitCode=0 Dec 11 18:27:38 crc kubenswrapper[4877]: I1211 18:27:38.349296 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerDied","Data":"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2"} Dec 11 18:27:39 crc kubenswrapper[4877]: I1211 18:27:39.379120 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerStarted","Data":"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4"} Dec 11 18:27:39 crc kubenswrapper[4877]: I1211 18:27:39.425504 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c789n" podStartSLOduration=1.8398282240000001 podStartE2EDuration="4.425483706s" podCreationTimestamp="2025-12-11 18:27:35 +0000 
UTC" firstStartedPulling="2025-12-11 18:27:36.324615345 +0000 UTC m=+1617.350859389" lastFinishedPulling="2025-12-11 18:27:38.910270827 +0000 UTC m=+1619.936514871" observedRunningTime="2025-12-11 18:27:39.39873199 +0000 UTC m=+1620.424976034" watchObservedRunningTime="2025-12-11 18:27:39.425483706 +0000 UTC m=+1620.451727750" Dec 11 18:27:43 crc kubenswrapper[4877]: I1211 18:27:43.973098 4877 scope.go:117] "RemoveContainer" containerID="81aa9b87184d399502c75da40beaa9e7ad75594780d8593f10ec43b1591f8de8" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.021660 4877 scope.go:117] "RemoveContainer" containerID="eb2b930d88003b2b206cca782b71ce27c67dae8e0d903bebffc95bbac1e76348" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.089529 4877 scope.go:117] "RemoveContainer" containerID="c152a4ad35508cda15d01261a25ab99d9aa1b55b4244c57d921680c2a6adde95" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.137797 4877 scope.go:117] "RemoveContainer" containerID="54ad9c8b5261b9257a44e023a4b5313664aa7d61131e4111d7bae8aea5d964da" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.186567 4877 scope.go:117] "RemoveContainer" containerID="6073bbf4b9e14711e2b0d582290b6a061c2ba7f6f569a41270f3ca82ac241837" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.216896 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:27:44 crc kubenswrapper[4877]: E1211 18:27:44.217242 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.231037 4877 scope.go:117] "RemoveContainer" 
containerID="54275e14fec872dbf7c5e1b7d14a49937f3c11dc32fb3d8a02dd16c24ca9cf3b" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.290883 4877 scope.go:117] "RemoveContainer" containerID="9cc51c4edb729c10f60ac6e729203762d3b929ce0cf9e75a9290889697e2bc8e" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.312144 4877 scope.go:117] "RemoveContainer" containerID="4b764e2622b2f237a20ae7d1465cf185b8d6427ccf5672b753062307a9c67f46" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.337674 4877 scope.go:117] "RemoveContainer" containerID="464d9445c4c9aa30f57ee8ab4a1d59cbbb232ce5ad854cfe930f7384e96bc83d" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.384538 4877 scope.go:117] "RemoveContainer" containerID="d8c12f76524d085466f0a341aa8bc01d54554c588f5ddf3b3abc5e2e51a2b1e6" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.484898 4877 scope.go:117] "RemoveContainer" containerID="390c072bf6e41669bf8fd85475f140d4c93f52c3d40264349af5a87e2239ba9e" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.512215 4877 scope.go:117] "RemoveContainer" containerID="625d7b120449ed4f96310fee4fe84f0381f969b7acbf967997e7904697b947aa" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.541896 4877 scope.go:117] "RemoveContainer" containerID="ff25c374ca4c92b1925fc2de135eecfc0271a98afaa2c0fef9039ed0f69b2a97" Dec 11 18:27:44 crc kubenswrapper[4877]: I1211 18:27:44.575557 4877 scope.go:117] "RemoveContainer" containerID="b19a3510152bb7b7814140b6795acbb0b94a4bbb6bf38d3747952d6e904b024d" Dec 11 18:27:45 crc kubenswrapper[4877]: I1211 18:27:45.595081 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:45 crc kubenswrapper[4877]: I1211 18:27:45.595135 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:45 crc kubenswrapper[4877]: I1211 18:27:45.665864 4877 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:46 crc kubenswrapper[4877]: I1211 18:27:46.562304 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:46 crc kubenswrapper[4877]: I1211 18:27:46.632433 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:48 crc kubenswrapper[4877]: I1211 18:27:48.523829 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c789n" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="registry-server" containerID="cri-o://08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4" gracePeriod=2 Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.503668 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.541876 4877 generic.go:334] "Generic (PLEG): container finished" podID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerID="08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4" exitCode=0 Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.541940 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerDied","Data":"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4"} Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.541955 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c789n" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.541978 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c789n" event={"ID":"57f41c35-0ddd-4613-8660-bc25b0b190e3","Type":"ContainerDied","Data":"fe3906e3d407adb26d1a23063c408851cef80140101a9835c7019502e2242dd8"} Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.542002 4877 scope.go:117] "RemoveContainer" containerID="08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.563695 4877 scope.go:117] "RemoveContainer" containerID="90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.588748 4877 scope.go:117] "RemoveContainer" containerID="b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.600723 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content\") pod \"57f41c35-0ddd-4613-8660-bc25b0b190e3\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.600900 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities\") pod \"57f41c35-0ddd-4613-8660-bc25b0b190e3\" (UID: \"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.601007 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fxp9\" (UniqueName: \"kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9\") pod \"57f41c35-0ddd-4613-8660-bc25b0b190e3\" (UID: 
\"57f41c35-0ddd-4613-8660-bc25b0b190e3\") " Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.602475 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities" (OuterVolumeSpecName: "utilities") pod "57f41c35-0ddd-4613-8660-bc25b0b190e3" (UID: "57f41c35-0ddd-4613-8660-bc25b0b190e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.608199 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9" (OuterVolumeSpecName: "kube-api-access-9fxp9") pod "57f41c35-0ddd-4613-8660-bc25b0b190e3" (UID: "57f41c35-0ddd-4613-8660-bc25b0b190e3"). InnerVolumeSpecName "kube-api-access-9fxp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.634935 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57f41c35-0ddd-4613-8660-bc25b0b190e3" (UID: "57f41c35-0ddd-4613-8660-bc25b0b190e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.682006 4877 scope.go:117] "RemoveContainer" containerID="08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4" Dec 11 18:27:49 crc kubenswrapper[4877]: E1211 18:27:49.682700 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4\": container with ID starting with 08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4 not found: ID does not exist" containerID="08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.682776 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4"} err="failed to get container status \"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4\": rpc error: code = NotFound desc = could not find container \"08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4\": container with ID starting with 08dbf36841e809bd4f2a614b3f74530a37812a3fd185ea2f8b0a1fb754ba4cf4 not found: ID does not exist" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.682837 4877 scope.go:117] "RemoveContainer" containerID="90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2" Dec 11 18:27:49 crc kubenswrapper[4877]: E1211 18:27:49.683269 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2\": container with ID starting with 90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2 not found: ID does not exist" containerID="90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.683305 
4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2"} err="failed to get container status \"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2\": rpc error: code = NotFound desc = could not find container \"90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2\": container with ID starting with 90cf2f5f101a554ddf9714df8a8b4267b4cb5b3815713e3fb124557ca0fb59f2 not found: ID does not exist" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.683327 4877 scope.go:117] "RemoveContainer" containerID="b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d" Dec 11 18:27:49 crc kubenswrapper[4877]: E1211 18:27:49.683674 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d\": container with ID starting with b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d not found: ID does not exist" containerID="b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.683707 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d"} err="failed to get container status \"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d\": rpc error: code = NotFound desc = could not find container \"b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d\": container with ID starting with b3bc132fe41cad489813d0f8a34ffc73f0f1053f555883320924263d0ece6f9d not found: ID does not exist" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.704564 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fxp9\" (UniqueName: 
\"kubernetes.io/projected/57f41c35-0ddd-4613-8660-bc25b0b190e3-kube-api-access-9fxp9\") on node \"crc\" DevicePath \"\"" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.704632 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.704649 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57f41c35-0ddd-4613-8660-bc25b0b190e3-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.900494 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:49 crc kubenswrapper[4877]: I1211 18:27:49.912506 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c789n"] Dec 11 18:27:51 crc kubenswrapper[4877]: I1211 18:27:51.230353 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" path="/var/lib/kubelet/pods/57f41c35-0ddd-4613-8660-bc25b0b190e3/volumes" Dec 11 18:27:59 crc kubenswrapper[4877]: I1211 18:27:59.229427 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:27:59 crc kubenswrapper[4877]: E1211 18:27:59.230914 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:28:07 crc kubenswrapper[4877]: I1211 18:28:07.057805 4877 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-x4kfs"] Dec 11 18:28:07 crc kubenswrapper[4877]: I1211 18:28:07.067921 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-x4kfs"] Dec 11 18:28:07 crc kubenswrapper[4877]: I1211 18:28:07.231572 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de39620-4351-442e-afb7-b53270fffe41" path="/var/lib/kubelet/pods/8de39620-4351-442e-afb7-b53270fffe41/volumes" Dec 11 18:28:10 crc kubenswrapper[4877]: I1211 18:28:10.217018 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:28:10 crc kubenswrapper[4877]: E1211 18:28:10.218635 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:28:18 crc kubenswrapper[4877]: I1211 18:28:18.044720 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bcdfz"] Dec 11 18:28:18 crc kubenswrapper[4877]: I1211 18:28:18.056435 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bcdfz"] Dec 11 18:28:19 crc kubenswrapper[4877]: I1211 18:28:19.230650 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73110039-1660-4b03-9f07-2469ea7fe039" path="/var/lib/kubelet/pods/73110039-1660-4b03-9f07-2469ea7fe039/volumes" Dec 11 18:28:25 crc kubenswrapper[4877]: I1211 18:28:25.216178 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:28:25 crc kubenswrapper[4877]: E1211 18:28:25.217595 4877 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:28:25 crc kubenswrapper[4877]: I1211 18:28:25.989992 4877 generic.go:334] "Generic (PLEG): container finished" podID="2450d804-2d74-4d93-8a06-95190b0c8e94" containerID="cbd9cc8f54028343d92a6cf7e9632b622403a9c5ddfdb03c2c86c3cf72daf75a" exitCode=0 Dec 11 18:28:25 crc kubenswrapper[4877]: I1211 18:28:25.990123 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" event={"ID":"2450d804-2d74-4d93-8a06-95190b0c8e94","Type":"ContainerDied","Data":"cbd9cc8f54028343d92a6cf7e9632b622403a9c5ddfdb03c2c86c3cf72daf75a"} Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.064533 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c7595"] Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.075106 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c7595"] Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.234538 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d642ce6b-7f43-402d-9658-c824289a232c" path="/var/lib/kubelet/pods/d642ce6b-7f43-402d-9658-c824289a232c/volumes" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.505621 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.536347 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key\") pod \"2450d804-2d74-4d93-8a06-95190b0c8e94\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.536431 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkbhr\" (UniqueName: \"kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr\") pod \"2450d804-2d74-4d93-8a06-95190b0c8e94\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.536585 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory\") pod \"2450d804-2d74-4d93-8a06-95190b0c8e94\" (UID: \"2450d804-2d74-4d93-8a06-95190b0c8e94\") " Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.551168 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr" (OuterVolumeSpecName: "kube-api-access-pkbhr") pod "2450d804-2d74-4d93-8a06-95190b0c8e94" (UID: "2450d804-2d74-4d93-8a06-95190b0c8e94"). InnerVolumeSpecName "kube-api-access-pkbhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.570571 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory" (OuterVolumeSpecName: "inventory") pod "2450d804-2d74-4d93-8a06-95190b0c8e94" (UID: "2450d804-2d74-4d93-8a06-95190b0c8e94"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.576606 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2450d804-2d74-4d93-8a06-95190b0c8e94" (UID: "2450d804-2d74-4d93-8a06-95190b0c8e94"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.638985 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.639151 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkbhr\" (UniqueName: \"kubernetes.io/projected/2450d804-2d74-4d93-8a06-95190b0c8e94-kube-api-access-pkbhr\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:27 crc kubenswrapper[4877]: I1211 18:28:27.639178 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2450d804-2d74-4d93-8a06-95190b0c8e94-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.056263 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" event={"ID":"2450d804-2d74-4d93-8a06-95190b0c8e94","Type":"ContainerDied","Data":"c8133219047e3f100c6bd11503a5d57985c4fb298a07e5d9aeca7042bb46d0ea"} Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.056330 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8133219047e3f100c6bd11503a5d57985c4fb298a07e5d9aeca7042bb46d0ea" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.056429 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.129847 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7"] Dec 11 18:28:28 crc kubenswrapper[4877]: E1211 18:28:28.130740 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="registry-server" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.130760 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="registry-server" Dec 11 18:28:28 crc kubenswrapper[4877]: E1211 18:28:28.130790 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="extract-utilities" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.130802 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="extract-utilities" Dec 11 18:28:28 crc kubenswrapper[4877]: E1211 18:28:28.130846 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2450d804-2d74-4d93-8a06-95190b0c8e94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.130857 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2450d804-2d74-4d93-8a06-95190b0c8e94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 18:28:28 crc kubenswrapper[4877]: E1211 18:28:28.130877 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="extract-content" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.130887 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="extract-content" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 
18:28:28.131129 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2450d804-2d74-4d93-8a06-95190b0c8e94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.131150 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f41c35-0ddd-4613-8660-bc25b0b190e3" containerName="registry-server" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.132123 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.135820 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.136071 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.136234 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.136402 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.141778 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7"] Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.253862 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc 
kubenswrapper[4877]: I1211 18:28:28.254184 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvqb7\" (UniqueName: \"kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.254757 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.357781 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.357954 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.358081 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvqb7\" (UniqueName: 
\"kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.365236 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.367282 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.377674 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvqb7\" (UniqueName: \"kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4twd7\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:28 crc kubenswrapper[4877]: I1211 18:28:28.452039 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.053941 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7"] Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.427403 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.429891 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.456599 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.588998 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.589463 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzczs\" (UniqueName: \"kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.589632 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") 
" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.692441 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzczs\" (UniqueName: \"kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.692782 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.692988 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.693457 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.693765 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc 
kubenswrapper[4877]: I1211 18:28:29.716327 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzczs\" (UniqueName: \"kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs\") pod \"redhat-operators-cmhls\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:29 crc kubenswrapper[4877]: I1211 18:28:29.755795 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:30 crc kubenswrapper[4877]: I1211 18:28:30.076523 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" event={"ID":"ba5e024b-8ec8-4214-bca2-9dbf57f69623","Type":"ContainerStarted","Data":"98d425b1bfa45bc72968858c35ddfba03875ce917df3e3f4c45244e12dfbd1b2"} Dec 11 18:28:30 crc kubenswrapper[4877]: I1211 18:28:30.077046 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" event={"ID":"ba5e024b-8ec8-4214-bca2-9dbf57f69623","Type":"ContainerStarted","Data":"88b7bda39302a02a8cd7f4b18655a0a6610454510b017b7b77ca8f7df56f971b"} Dec 11 18:28:30 crc kubenswrapper[4877]: I1211 18:28:30.116020 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" podStartSLOduration=1.6317068049999999 podStartE2EDuration="2.115984504s" podCreationTimestamp="2025-12-11 18:28:28 +0000 UTC" firstStartedPulling="2025-12-11 18:28:29.061021348 +0000 UTC m=+1670.087265402" lastFinishedPulling="2025-12-11 18:28:29.545299057 +0000 UTC m=+1670.571543101" observedRunningTime="2025-12-11 18:28:30.099960049 +0000 UTC m=+1671.126204123" watchObservedRunningTime="2025-12-11 18:28:30.115984504 +0000 UTC m=+1671.142228578" Dec 11 18:28:30 crc kubenswrapper[4877]: I1211 18:28:30.983068 4877 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:31 crc kubenswrapper[4877]: I1211 18:28:31.086597 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerStarted","Data":"7d35a027ab02f31b1b099d78fa3ed1d4886c479827d09db740358d3065a0e13c"} Dec 11 18:28:31 crc kubenswrapper[4877]: E1211 18:28:31.425702 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d955971_e524_45da_93d8_b270b72cb309.slice/crio-29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d955971_e524_45da_93d8_b270b72cb309.slice/crio-conmon-29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65.scope\": RecentStats: unable to find data in memory cache]" Dec 11 18:28:32 crc kubenswrapper[4877]: I1211 18:28:32.102757 4877 generic.go:334] "Generic (PLEG): container finished" podID="6d955971-e524-45da-93d8-b270b72cb309" containerID="29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65" exitCode=0 Dec 11 18:28:32 crc kubenswrapper[4877]: I1211 18:28:32.102837 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerDied","Data":"29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65"} Dec 11 18:28:33 crc kubenswrapper[4877]: I1211 18:28:33.124322 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerStarted","Data":"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2"} Dec 11 18:28:34 crc 
kubenswrapper[4877]: I1211 18:28:34.405532 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.408148 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.424798 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.507331 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf68l\" (UniqueName: \"kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.507639 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.507684 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.610167 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.610227 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.610262 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf68l\" (UniqueName: \"kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.610892 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.611194 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.636631 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf68l\" (UniqueName: 
\"kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l\") pod \"community-operators-5hq6c\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:34 crc kubenswrapper[4877]: I1211 18:28:34.755896 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:35 crc kubenswrapper[4877]: I1211 18:28:35.062470 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wgnnt"] Dec 11 18:28:35 crc kubenswrapper[4877]: I1211 18:28:35.073452 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wgnnt"] Dec 11 18:28:35 crc kubenswrapper[4877]: I1211 18:28:35.228281 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee9614f-acc1-4883-989e-6348978f4641" path="/var/lib/kubelet/pods/fee9614f-acc1-4883-989e-6348978f4641/volumes" Dec 11 18:28:35 crc kubenswrapper[4877]: I1211 18:28:35.357558 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:35 crc kubenswrapper[4877]: W1211 18:28:35.749616 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911092ab_0cb6_41ed_9a92_e0a280ec9e2d.slice/crio-218d822a53346d97ac8fd5b95813c455298a251824a3106fb3f1bc4653b8ecb6 WatchSource:0}: Error finding container 218d822a53346d97ac8fd5b95813c455298a251824a3106fb3f1bc4653b8ecb6: Status 404 returned error can't find the container with id 218d822a53346d97ac8fd5b95813c455298a251824a3106fb3f1bc4653b8ecb6 Dec 11 18:28:36 crc kubenswrapper[4877]: I1211 18:28:36.167174 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" 
event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerStarted","Data":"218d822a53346d97ac8fd5b95813c455298a251824a3106fb3f1bc4653b8ecb6"} Dec 11 18:28:36 crc kubenswrapper[4877]: I1211 18:28:36.173170 4877 generic.go:334] "Generic (PLEG): container finished" podID="6d955971-e524-45da-93d8-b270b72cb309" containerID="7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2" exitCode=0 Dec 11 18:28:36 crc kubenswrapper[4877]: I1211 18:28:36.173242 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerDied","Data":"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2"} Dec 11 18:28:37 crc kubenswrapper[4877]: I1211 18:28:37.042274 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vtwqc"] Dec 11 18:28:37 crc kubenswrapper[4877]: I1211 18:28:37.051613 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vtwqc"] Dec 11 18:28:37 crc kubenswrapper[4877]: I1211 18:28:37.191978 4877 generic.go:334] "Generic (PLEG): container finished" podID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerID="1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0" exitCode=0 Dec 11 18:28:37 crc kubenswrapper[4877]: I1211 18:28:37.192067 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerDied","Data":"1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0"} Dec 11 18:28:37 crc kubenswrapper[4877]: I1211 18:28:37.233874 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc9dafb-2cd8-4a57-b7f2-941c39748675" path="/var/lib/kubelet/pods/2cc9dafb-2cd8-4a57-b7f2-941c39748675/volumes" Dec 11 18:28:38 crc kubenswrapper[4877]: I1211 18:28:38.208482 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerStarted","Data":"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5"} Dec 11 18:28:38 crc kubenswrapper[4877]: I1211 18:28:38.232975 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmhls" podStartSLOduration=4.258779094 podStartE2EDuration="9.232952792s" podCreationTimestamp="2025-12-11 18:28:29 +0000 UTC" firstStartedPulling="2025-12-11 18:28:32.107900883 +0000 UTC m=+1673.134144937" lastFinishedPulling="2025-12-11 18:28:37.082074571 +0000 UTC m=+1678.108318635" observedRunningTime="2025-12-11 18:28:38.227576246 +0000 UTC m=+1679.253820290" watchObservedRunningTime="2025-12-11 18:28:38.232952792 +0000 UTC m=+1679.259196836" Dec 11 18:28:39 crc kubenswrapper[4877]: I1211 18:28:39.228328 4877 generic.go:334] "Generic (PLEG): container finished" podID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerID="36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424" exitCode=0 Dec 11 18:28:39 crc kubenswrapper[4877]: I1211 18:28:39.230485 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerDied","Data":"36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424"} Dec 11 18:28:39 crc kubenswrapper[4877]: I1211 18:28:39.230857 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:28:39 crc kubenswrapper[4877]: E1211 18:28:39.231819 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:28:39 crc kubenswrapper[4877]: I1211 18:28:39.758485 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:39 crc kubenswrapper[4877]: I1211 18:28:39.759213 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:40 crc kubenswrapper[4877]: I1211 18:28:40.818875 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmhls" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="registry-server" probeResult="failure" output=< Dec 11 18:28:40 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:28:40 crc kubenswrapper[4877]: > Dec 11 18:28:41 crc kubenswrapper[4877]: I1211 18:28:41.265306 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerStarted","Data":"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9"} Dec 11 18:28:41 crc kubenswrapper[4877]: I1211 18:28:41.300276 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5hq6c" podStartSLOduration=4.4551133929999995 podStartE2EDuration="7.300250711s" podCreationTimestamp="2025-12-11 18:28:34 +0000 UTC" firstStartedPulling="2025-12-11 18:28:37.198018269 +0000 UTC m=+1678.224262313" lastFinishedPulling="2025-12-11 18:28:40.043155577 +0000 UTC m=+1681.069399631" observedRunningTime="2025-12-11 18:28:41.293846877 +0000 UTC m=+1682.320090921" watchObservedRunningTime="2025-12-11 18:28:41.300250711 +0000 UTC m=+1682.326494755" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.756777 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.757338 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.861299 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.887360 4877 scope.go:117] "RemoveContainer" containerID="2932460f61b2adc9e522b23130551a011742d2ecaa94d84e6cccdbd01c7bcf75" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.935087 4877 scope.go:117] "RemoveContainer" containerID="161e5fc70c65ed69d826316c94e9a492f621cdcd40f85194f164b2de2916017d" Dec 11 18:28:44 crc kubenswrapper[4877]: I1211 18:28:44.988224 4877 scope.go:117] "RemoveContainer" containerID="fdee2ff244b2f630c75b41972cb4f2cd012275be3c830ccfbf42d963a1a5ad56" Dec 11 18:28:45 crc kubenswrapper[4877]: I1211 18:28:45.019746 4877 scope.go:117] "RemoveContainer" containerID="8070fd8f8c546372b9b65836b0182e287538b17f94b2a4c59a036fbf5bad787f" Dec 11 18:28:45 crc kubenswrapper[4877]: I1211 18:28:45.069555 4877 scope.go:117] "RemoveContainer" containerID="aedb7b60063c44706ef8b6497108a13ee0dca8ce89858270b71540f5254d21d7" Dec 11 18:28:45 crc kubenswrapper[4877]: I1211 18:28:45.350548 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:45 crc kubenswrapper[4877]: I1211 18:28:45.401428 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:47 crc kubenswrapper[4877]: I1211 18:28:47.324998 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5hq6c" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="registry-server" 
containerID="cri-o://f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9" gracePeriod=2 Dec 11 18:28:47 crc kubenswrapper[4877]: I1211 18:28:47.820305 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.001614 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content\") pod \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.001735 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf68l\" (UniqueName: \"kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l\") pod \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.001807 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities\") pod \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\" (UID: \"911092ab-0cb6-41ed-9a92-e0a280ec9e2d\") " Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.003050 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities" (OuterVolumeSpecName: "utilities") pod "911092ab-0cb6-41ed-9a92-e0a280ec9e2d" (UID: "911092ab-0cb6-41ed-9a92-e0a280ec9e2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.008744 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l" (OuterVolumeSpecName: "kube-api-access-nf68l") pod "911092ab-0cb6-41ed-9a92-e0a280ec9e2d" (UID: "911092ab-0cb6-41ed-9a92-e0a280ec9e2d"). InnerVolumeSpecName "kube-api-access-nf68l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.077693 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "911092ab-0cb6-41ed-9a92-e0a280ec9e2d" (UID: "911092ab-0cb6-41ed-9a92-e0a280ec9e2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.104805 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.104881 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf68l\" (UniqueName: \"kubernetes.io/projected/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-kube-api-access-nf68l\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.104910 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911092ab-0cb6-41ed-9a92-e0a280ec9e2d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.343870 4877 generic.go:334] "Generic (PLEG): container finished" podID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" 
containerID="f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9" exitCode=0 Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.343926 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerDied","Data":"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9"} Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.343966 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hq6c" event={"ID":"911092ab-0cb6-41ed-9a92-e0a280ec9e2d","Type":"ContainerDied","Data":"218d822a53346d97ac8fd5b95813c455298a251824a3106fb3f1bc4653b8ecb6"} Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.343988 4877 scope.go:117] "RemoveContainer" containerID="f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.344164 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5hq6c" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.379897 4877 scope.go:117] "RemoveContainer" containerID="36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.411888 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.422619 4877 scope.go:117] "RemoveContainer" containerID="1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.423392 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5hq6c"] Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.481565 4877 scope.go:117] "RemoveContainer" containerID="f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9" Dec 11 18:28:48 crc kubenswrapper[4877]: E1211 18:28:48.482788 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9\": container with ID starting with f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9 not found: ID does not exist" containerID="f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.482834 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9"} err="failed to get container status \"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9\": rpc error: code = NotFound desc = could not find container \"f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9\": container with ID starting with f973e3fade18ce7ff1e61a3109279fe229c31a1f6330dd3a622fbc5caf367ac9 not 
found: ID does not exist" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.482858 4877 scope.go:117] "RemoveContainer" containerID="36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424" Dec 11 18:28:48 crc kubenswrapper[4877]: E1211 18:28:48.483473 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424\": container with ID starting with 36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424 not found: ID does not exist" containerID="36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.483513 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424"} err="failed to get container status \"36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424\": rpc error: code = NotFound desc = could not find container \"36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424\": container with ID starting with 36d8b99308ab624a3bee89d556229390ff3284e707074d6d6a061ed256431424 not found: ID does not exist" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.483537 4877 scope.go:117] "RemoveContainer" containerID="1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0" Dec 11 18:28:48 crc kubenswrapper[4877]: E1211 18:28:48.484966 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0\": container with ID starting with 1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0 not found: ID does not exist" containerID="1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0" Dec 11 18:28:48 crc kubenswrapper[4877]: I1211 18:28:48.484998 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0"} err="failed to get container status \"1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0\": rpc error: code = NotFound desc = could not find container \"1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0\": container with ID starting with 1305051af072845fd02005cb44ec0396e8ebcaf30b9d4bee635e29b439be6ab0 not found: ID does not exist" Dec 11 18:28:49 crc kubenswrapper[4877]: I1211 18:28:49.234210 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" path="/var/lib/kubelet/pods/911092ab-0cb6-41ed-9a92-e0a280ec9e2d/volumes" Dec 11 18:28:49 crc kubenswrapper[4877]: I1211 18:28:49.836791 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:49 crc kubenswrapper[4877]: I1211 18:28:49.933916 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:50 crc kubenswrapper[4877]: I1211 18:28:50.509063 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.402971 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cmhls" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="registry-server" containerID="cri-o://e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5" gracePeriod=2 Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.887916 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.908441 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities\") pod \"6d955971-e524-45da-93d8-b270b72cb309\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.908832 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content\") pod \"6d955971-e524-45da-93d8-b270b72cb309\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.908945 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzczs\" (UniqueName: \"kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs\") pod \"6d955971-e524-45da-93d8-b270b72cb309\" (UID: \"6d955971-e524-45da-93d8-b270b72cb309\") " Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.909951 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities" (OuterVolumeSpecName: "utilities") pod "6d955971-e524-45da-93d8-b270b72cb309" (UID: "6d955971-e524-45da-93d8-b270b72cb309"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:28:51 crc kubenswrapper[4877]: I1211 18:28:51.955758 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs" (OuterVolumeSpecName: "kube-api-access-bzczs") pod "6d955971-e524-45da-93d8-b270b72cb309" (UID: "6d955971-e524-45da-93d8-b270b72cb309"). InnerVolumeSpecName "kube-api-access-bzczs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.011194 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.011219 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzczs\" (UniqueName: \"kubernetes.io/projected/6d955971-e524-45da-93d8-b270b72cb309-kube-api-access-bzczs\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.037552 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d955971-e524-45da-93d8-b270b72cb309" (UID: "6d955971-e524-45da-93d8-b270b72cb309"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.114554 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d955971-e524-45da-93d8-b270b72cb309-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.216890 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:28:52 crc kubenswrapper[4877]: E1211 18:28:52.217293 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:28:52 
crc kubenswrapper[4877]: I1211 18:28:52.419834 4877 generic.go:334] "Generic (PLEG): container finished" podID="6d955971-e524-45da-93d8-b270b72cb309" containerID="e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5" exitCode=0 Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.419892 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerDied","Data":"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5"} Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.419935 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmhls" event={"ID":"6d955971-e524-45da-93d8-b270b72cb309","Type":"ContainerDied","Data":"7d35a027ab02f31b1b099d78fa3ed1d4886c479827d09db740358d3065a0e13c"} Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.419966 4877 scope.go:117] "RemoveContainer" containerID="e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.419959 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmhls" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.454731 4877 scope.go:117] "RemoveContainer" containerID="7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.476703 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.488277 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cmhls"] Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.502680 4877 scope.go:117] "RemoveContainer" containerID="29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.564261 4877 scope.go:117] "RemoveContainer" containerID="e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5" Dec 11 18:28:52 crc kubenswrapper[4877]: E1211 18:28:52.566035 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5\": container with ID starting with e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5 not found: ID does not exist" containerID="e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.566090 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5"} err="failed to get container status \"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5\": rpc error: code = NotFound desc = could not find container \"e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5\": container with ID starting with e0de6449181a06ae35ee1cbf481665f251e84347ad016d766b878a06f77d70d5 not found: ID does 
not exist" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.566129 4877 scope.go:117] "RemoveContainer" containerID="7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2" Dec 11 18:28:52 crc kubenswrapper[4877]: E1211 18:28:52.567201 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2\": container with ID starting with 7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2 not found: ID does not exist" containerID="7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.567242 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2"} err="failed to get container status \"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2\": rpc error: code = NotFound desc = could not find container \"7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2\": container with ID starting with 7761b19e36505bebaca9ac0cf491a1e7e74070aced6e7a9251efc650cfb0e8e2 not found: ID does not exist" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.567265 4877 scope.go:117] "RemoveContainer" containerID="29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65" Dec 11 18:28:52 crc kubenswrapper[4877]: E1211 18:28:52.567623 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65\": container with ID starting with 29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65 not found: ID does not exist" containerID="29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65" Dec 11 18:28:52 crc kubenswrapper[4877]: I1211 18:28:52.567653 4877 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65"} err="failed to get container status \"29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65\": rpc error: code = NotFound desc = could not find container \"29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65\": container with ID starting with 29b79d60e980a62af4f6cff06d548577ac7d9907b7745d4222e7a8b6385c0f65 not found: ID does not exist" Dec 11 18:28:53 crc kubenswrapper[4877]: I1211 18:28:53.236000 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d955971-e524-45da-93d8-b270b72cb309" path="/var/lib/kubelet/pods/6d955971-e524-45da-93d8-b270b72cb309/volumes" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.215487 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.216911 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.288886 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289529 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="extract-utilities" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289551 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="extract-utilities" Dec 11 
18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289578 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="extract-utilities" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289587 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="extract-utilities" Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289629 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289639 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289658 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289667 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289683 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="extract-content" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289693 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="extract-content" Dec 11 18:29:05 crc kubenswrapper[4877]: E1211 18:29:05.289714 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="extract-content" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289722 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="extract-content" Dec 11 18:29:05 
crc kubenswrapper[4877]: I1211 18:29:05.289963 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d955971-e524-45da-93d8-b270b72cb309" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.289982 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="911092ab-0cb6-41ed-9a92-e0a280ec9e2d" containerName="registry-server" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.292036 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.300427 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.300531 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwtr\" (UniqueName: \"kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.300561 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.333460 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.402345 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.402585 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.402698 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwtr\" (UniqueName: \"kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.403098 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.403121 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " 
pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.439187 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwtr\" (UniqueName: \"kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr\") pod \"certified-operators-dw24p\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:05 crc kubenswrapper[4877]: I1211 18:29:05.619927 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:06 crc kubenswrapper[4877]: I1211 18:29:06.151822 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:06 crc kubenswrapper[4877]: I1211 18:29:06.601559 4877 generic.go:334] "Generic (PLEG): container finished" podID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerID="51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042" exitCode=0 Dec 11 18:29:06 crc kubenswrapper[4877]: I1211 18:29:06.601647 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerDied","Data":"51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042"} Dec 11 18:29:06 crc kubenswrapper[4877]: I1211 18:29:06.601880 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerStarted","Data":"53439941a150e3cf65f45bc9b83904061306e03385128d28ddc3cc361caffcc7"} Dec 11 18:29:08 crc kubenswrapper[4877]: I1211 18:29:08.631969 4877 generic.go:334] "Generic (PLEG): container finished" podID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerID="b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f" exitCode=0 Dec 11 18:29:08 crc 
kubenswrapper[4877]: I1211 18:29:08.632092 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerDied","Data":"b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f"} Dec 11 18:29:09 crc kubenswrapper[4877]: I1211 18:29:09.646663 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerStarted","Data":"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f"} Dec 11 18:29:09 crc kubenswrapper[4877]: I1211 18:29:09.678691 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dw24p" podStartSLOduration=2.100461769 podStartE2EDuration="4.67866622s" podCreationTimestamp="2025-12-11 18:29:05 +0000 UTC" firstStartedPulling="2025-12-11 18:29:06.603947208 +0000 UTC m=+1707.630191262" lastFinishedPulling="2025-12-11 18:29:09.182151679 +0000 UTC m=+1710.208395713" observedRunningTime="2025-12-11 18:29:09.666355519 +0000 UTC m=+1710.692599593" watchObservedRunningTime="2025-12-11 18:29:09.67866622 +0000 UTC m=+1710.704910274" Dec 11 18:29:13 crc kubenswrapper[4877]: I1211 18:29:13.730528 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" exitCode=1 Dec 11 18:29:13 crc kubenswrapper[4877]: I1211 18:29:13.730643 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7"} Dec 11 18:29:13 crc kubenswrapper[4877]: I1211 18:29:13.731494 4877 scope.go:117] "RemoveContainer" 
containerID="600e5d2b45223747bbf283f905e0245b0efc5b93a926d6c06bcac2ae8d13d24f" Dec 11 18:29:13 crc kubenswrapper[4877]: I1211 18:29:13.732680 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:29:13 crc kubenswrapper[4877]: E1211 18:29:13.733131 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:29:15 crc kubenswrapper[4877]: I1211 18:29:15.620033 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:15 crc kubenswrapper[4877]: I1211 18:29:15.620774 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:15 crc kubenswrapper[4877]: I1211 18:29:15.714195 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:15 crc kubenswrapper[4877]: I1211 18:29:15.843957 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:15 crc kubenswrapper[4877]: I1211 18:29:15.968736 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:17 crc kubenswrapper[4877]: I1211 18:29:17.778053 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dw24p" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="registry-server" 
containerID="cri-o://27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f" gracePeriod=2 Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.216125 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:29:18 crc kubenswrapper[4877]: E1211 18:29:18.217045 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.230733 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.403262 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwtr\" (UniqueName: \"kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr\") pod \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.403432 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities\") pod \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\" (UID: \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.403452 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content\") pod \"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\" (UID: 
\"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0\") " Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.404729 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities" (OuterVolumeSpecName: "utilities") pod "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" (UID: "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.411549 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr" (OuterVolumeSpecName: "kube-api-access-gwwtr") pod "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" (UID: "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0"). InnerVolumeSpecName "kube-api-access-gwwtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.506140 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwtr\" (UniqueName: \"kubernetes.io/projected/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-kube-api-access-gwwtr\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.506177 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.520218 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" (UID: "9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.608841 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.790586 4877 generic.go:334] "Generic (PLEG): container finished" podID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerID="27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f" exitCode=0 Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.790635 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerDied","Data":"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f"} Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.790673 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw24p" event={"ID":"9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0","Type":"ContainerDied","Data":"53439941a150e3cf65f45bc9b83904061306e03385128d28ddc3cc361caffcc7"} Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.790696 4877 scope.go:117] "RemoveContainer" containerID="27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.790710 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dw24p" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.834881 4877 scope.go:117] "RemoveContainer" containerID="b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.879334 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.890642 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dw24p"] Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.890681 4877 scope.go:117] "RemoveContainer" containerID="51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.924969 4877 scope.go:117] "RemoveContainer" containerID="27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f" Dec 11 18:29:18 crc kubenswrapper[4877]: E1211 18:29:18.925660 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f\": container with ID starting with 27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f not found: ID does not exist" containerID="27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.925725 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f"} err="failed to get container status \"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f\": rpc error: code = NotFound desc = could not find container \"27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f\": container with ID starting with 27b342fe503e24b9b958a0a5de0ece80ec3e2a9073d5edf52b799514dafa151f not 
found: ID does not exist" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.925762 4877 scope.go:117] "RemoveContainer" containerID="b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f" Dec 11 18:29:18 crc kubenswrapper[4877]: E1211 18:29:18.926231 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f\": container with ID starting with b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f not found: ID does not exist" containerID="b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.926280 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f"} err="failed to get container status \"b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f\": rpc error: code = NotFound desc = could not find container \"b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f\": container with ID starting with b8fb6819e7537f6a9f6414469a5175815eeb6fda3576e989577e3945078f2b8f not found: ID does not exist" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.926317 4877 scope.go:117] "RemoveContainer" containerID="51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042" Dec 11 18:29:18 crc kubenswrapper[4877]: E1211 18:29:18.926819 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042\": container with ID starting with 51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042 not found: ID does not exist" containerID="51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042" Dec 11 18:29:18 crc kubenswrapper[4877]: I1211 18:29:18.926858 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042"} err="failed to get container status \"51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042\": rpc error: code = NotFound desc = could not find container \"51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042\": container with ID starting with 51670d78193dc54e79011b50c8b260de93eda926b6cd15df1d7f9ce81e421042 not found: ID does not exist" Dec 11 18:29:19 crc kubenswrapper[4877]: I1211 18:29:19.230901 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" path="/var/lib/kubelet/pods/9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0/volumes" Dec 11 18:29:21 crc kubenswrapper[4877]: I1211 18:29:21.137736 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:29:21 crc kubenswrapper[4877]: I1211 18:29:21.137796 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:29:21 crc kubenswrapper[4877]: I1211 18:29:21.138650 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:29:21 crc kubenswrapper[4877]: E1211 18:29:21.139001 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:29:22 crc kubenswrapper[4877]: I1211 18:29:22.058405 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-s6qkp"] Dec 11 18:29:22 crc kubenswrapper[4877]: I1211 18:29:22.068995 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s6qkp"] Dec 11 18:29:23 crc kubenswrapper[4877]: I1211 18:29:23.037598 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ncpkj"] Dec 11 18:29:23 crc kubenswrapper[4877]: I1211 18:29:23.054360 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ncpkj"] Dec 11 18:29:23 crc kubenswrapper[4877]: I1211 18:29:23.227514 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0501ada-a122-49bd-a65b-52ff7ee6fe00" path="/var/lib/kubelet/pods/f0501ada-a122-49bd-a65b-52ff7ee6fe00/volumes" Dec 11 18:29:23 crc kubenswrapper[4877]: I1211 18:29:23.228773 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fefe31f3-c374-42c0-9af1-a7e2d095bc6d" path="/var/lib/kubelet/pods/fefe31f3-c374-42c0-9af1-a7e2d095bc6d/volumes" Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.053099 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a10b-account-create-update-csxdx"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.069610 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2a65-account-create-update-vf2wg"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.081682 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a10b-account-create-update-csxdx"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.090774 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-44gb8"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.097009 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3f44-account-create-update-r8mvs"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.103478 4877 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2a65-account-create-update-vf2wg"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.109928 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-44gb8"] Dec 11 18:29:24 crc kubenswrapper[4877]: I1211 18:29:24.115374 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3f44-account-create-update-r8mvs"] Dec 11 18:29:25 crc kubenswrapper[4877]: I1211 18:29:25.231317 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b57b46-038c-402a-ab72-36f2870a32fd" path="/var/lib/kubelet/pods/18b57b46-038c-402a-ab72-36f2870a32fd/volumes" Dec 11 18:29:25 crc kubenswrapper[4877]: I1211 18:29:25.232568 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a34fa8b-c5b0-4297-bbc6-609bf82854f7" path="/var/lib/kubelet/pods/2a34fa8b-c5b0-4297-bbc6-609bf82854f7/volumes" Dec 11 18:29:25 crc kubenswrapper[4877]: I1211 18:29:25.233769 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce805e83-9fac-4e8d-a823-33210302631d" path="/var/lib/kubelet/pods/ce805e83-9fac-4e8d-a823-33210302631d/volumes" Dec 11 18:29:25 crc kubenswrapper[4877]: I1211 18:29:25.236443 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb8b32a-6068-41da-bac0-f13c2a25e815" path="/var/lib/kubelet/pods/feb8b32a-6068-41da-bac0-f13c2a25e815/volumes" Dec 11 18:29:29 crc kubenswrapper[4877]: I1211 18:29:29.223361 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:29:29 crc kubenswrapper[4877]: E1211 18:29:29.224041 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:29:36 crc kubenswrapper[4877]: I1211 18:29:36.216387 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:29:36 crc kubenswrapper[4877]: E1211 18:29:36.237224 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:29:42 crc kubenswrapper[4877]: I1211 18:29:42.216207 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:29:42 crc kubenswrapper[4877]: E1211 18:29:42.217419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.252463 4877 scope.go:117] "RemoveContainer" containerID="fa0f679b4a7bb89bbdc0d4781c54155065ba140b832a69fc2245994f97aa382f" Dec 11 18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.297039 4877 scope.go:117] "RemoveContainer" containerID="9eddaed452cb9848a0511be0a491bbd1d20878cf5ee8e1fbd39d39c2a93fd142" Dec 11 18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.361652 4877 scope.go:117] "RemoveContainer" containerID="b05c2b7b38f37068eb19015fbff2ac4bbc376d0cf6a01bfe5a33e36652620f6a" Dec 11 
18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.414188 4877 scope.go:117] "RemoveContainer" containerID="056d9294d137877c00f2775e30ac2ea37bce34b967f8a64cf7971c162afbe526" Dec 11 18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.459690 4877 scope.go:117] "RemoveContainer" containerID="4b7f7b11dd4bb64860113c2af0fea830d8bcda5eb68b312b2be04aff6950b812" Dec 11 18:29:45 crc kubenswrapper[4877]: I1211 18:29:45.500937 4877 scope.go:117] "RemoveContainer" containerID="9da232cc81696c4f0519fbd27d5cce68f2943a6679d5a012a6b07f8aeea5312f" Dec 11 18:29:48 crc kubenswrapper[4877]: I1211 18:29:48.186776 4877 generic.go:334] "Generic (PLEG): container finished" podID="ba5e024b-8ec8-4214-bca2-9dbf57f69623" containerID="98d425b1bfa45bc72968858c35ddfba03875ce917df3e3f4c45244e12dfbd1b2" exitCode=0 Dec 11 18:29:48 crc kubenswrapper[4877]: I1211 18:29:48.186879 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" event={"ID":"ba5e024b-8ec8-4214-bca2-9dbf57f69623","Type":"ContainerDied","Data":"98d425b1bfa45bc72968858c35ddfba03875ce917df3e3f4c45244e12dfbd1b2"} Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.672085 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.815672 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key\") pod \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.815946 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvqb7\" (UniqueName: \"kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7\") pod \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.816284 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory\") pod \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\" (UID: \"ba5e024b-8ec8-4214-bca2-9dbf57f69623\") " Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.822967 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7" (OuterVolumeSpecName: "kube-api-access-rvqb7") pod "ba5e024b-8ec8-4214-bca2-9dbf57f69623" (UID: "ba5e024b-8ec8-4214-bca2-9dbf57f69623"). InnerVolumeSpecName "kube-api-access-rvqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.847360 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba5e024b-8ec8-4214-bca2-9dbf57f69623" (UID: "ba5e024b-8ec8-4214-bca2-9dbf57f69623"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.876119 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory" (OuterVolumeSpecName: "inventory") pod "ba5e024b-8ec8-4214-bca2-9dbf57f69623" (UID: "ba5e024b-8ec8-4214-bca2-9dbf57f69623"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.919424 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.919467 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvqb7\" (UniqueName: \"kubernetes.io/projected/ba5e024b-8ec8-4214-bca2-9dbf57f69623-kube-api-access-rvqb7\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:49 crc kubenswrapper[4877]: I1211 18:29:49.919483 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba5e024b-8ec8-4214-bca2-9dbf57f69623-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.215816 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" event={"ID":"ba5e024b-8ec8-4214-bca2-9dbf57f69623","Type":"ContainerDied","Data":"88b7bda39302a02a8cd7f4b18655a0a6610454510b017b7b77ca8f7df56f971b"} Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.215859 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b7bda39302a02a8cd7f4b18655a0a6610454510b017b7b77ca8f7df56f971b" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.215911 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4twd7" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.333528 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5"] Dec 11 18:29:50 crc kubenswrapper[4877]: E1211 18:29:50.334157 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="extract-utilities" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.334193 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="extract-utilities" Dec 11 18:29:50 crc kubenswrapper[4877]: E1211 18:29:50.334226 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="registry-server" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.334238 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="registry-server" Dec 11 18:29:50 crc kubenswrapper[4877]: E1211 18:29:50.334260 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="extract-content" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.334272 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="extract-content" Dec 11 18:29:50 crc kubenswrapper[4877]: E1211 18:29:50.334308 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5e024b-8ec8-4214-bca2-9dbf57f69623" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.334320 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5e024b-8ec8-4214-bca2-9dbf57f69623" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:29:50 crc 
kubenswrapper[4877]: I1211 18:29:50.334723 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5e024b-8ec8-4214-bca2-9dbf57f69623" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.334760 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4c6acd-80ba-4d33-9ab0-e8e87f7da5c0" containerName="registry-server" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.336043 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.344219 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.344351 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.344400 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.345257 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.351155 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5"] Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.430872 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5rj\" (UniqueName: \"kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.430925 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.430955 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.533444 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5rj\" (UniqueName: \"kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.533494 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.533524 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.538668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.538787 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.550956 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5rj\" (UniqueName: \"kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:50 crc kubenswrapper[4877]: I1211 18:29:50.661218 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:51 crc kubenswrapper[4877]: I1211 18:29:51.218251 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:29:51 crc kubenswrapper[4877]: E1211 18:29:51.219150 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:29:51 crc kubenswrapper[4877]: I1211 18:29:51.275096 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5"] Dec 11 18:29:52 crc kubenswrapper[4877]: I1211 18:29:52.241571 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" event={"ID":"18c92635-2d69-45d4-b25a-8a67a228e11c","Type":"ContainerStarted","Data":"f2625315149d711ecdcb252a9aef30d6d7a94c08357ca23408b42b486980e645"} Dec 11 18:29:52 crc kubenswrapper[4877]: I1211 18:29:52.241863 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" event={"ID":"18c92635-2d69-45d4-b25a-8a67a228e11c","Type":"ContainerStarted","Data":"6fe4de735396f731eaa984686aa8928aee4f798874b3501a6ad5b2cc3ddec9b6"} Dec 11 18:29:52 crc kubenswrapper[4877]: I1211 18:29:52.268085 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" podStartSLOduration=1.8265490739999999 podStartE2EDuration="2.26806391s" podCreationTimestamp="2025-12-11 18:29:50 +0000 UTC" 
firstStartedPulling="2025-12-11 18:29:51.281505576 +0000 UTC m=+1752.307749620" lastFinishedPulling="2025-12-11 18:29:51.723020402 +0000 UTC m=+1752.749264456" observedRunningTime="2025-12-11 18:29:52.257121944 +0000 UTC m=+1753.283366008" watchObservedRunningTime="2025-12-11 18:29:52.26806391 +0000 UTC m=+1753.294307954" Dec 11 18:29:54 crc kubenswrapper[4877]: I1211 18:29:54.217105 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:29:54 crc kubenswrapper[4877]: E1211 18:29:54.218439 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:29:58 crc kubenswrapper[4877]: I1211 18:29:58.328693 4877 generic.go:334] "Generic (PLEG): container finished" podID="18c92635-2d69-45d4-b25a-8a67a228e11c" containerID="f2625315149d711ecdcb252a9aef30d6d7a94c08357ca23408b42b486980e645" exitCode=0 Dec 11 18:29:58 crc kubenswrapper[4877]: I1211 18:29:58.328773 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" event={"ID":"18c92635-2d69-45d4-b25a-8a67a228e11c","Type":"ContainerDied","Data":"f2625315149d711ecdcb252a9aef30d6d7a94c08357ca23408b42b486980e645"} Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.820706 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.896659 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory\") pod \"18c92635-2d69-45d4-b25a-8a67a228e11c\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.896794 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key\") pod \"18c92635-2d69-45d4-b25a-8a67a228e11c\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.896908 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z5rj\" (UniqueName: \"kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj\") pod \"18c92635-2d69-45d4-b25a-8a67a228e11c\" (UID: \"18c92635-2d69-45d4-b25a-8a67a228e11c\") " Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.904962 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj" (OuterVolumeSpecName: "kube-api-access-2z5rj") pod "18c92635-2d69-45d4-b25a-8a67a228e11c" (UID: "18c92635-2d69-45d4-b25a-8a67a228e11c"). InnerVolumeSpecName "kube-api-access-2z5rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.933269 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18c92635-2d69-45d4-b25a-8a67a228e11c" (UID: "18c92635-2d69-45d4-b25a-8a67a228e11c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.943528 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory" (OuterVolumeSpecName: "inventory") pod "18c92635-2d69-45d4-b25a-8a67a228e11c" (UID: "18c92635-2d69-45d4-b25a-8a67a228e11c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.999865 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.999904 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18c92635-2d69-45d4-b25a-8a67a228e11c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:29:59 crc kubenswrapper[4877]: I1211 18:29:59.999915 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z5rj\" (UniqueName: \"kubernetes.io/projected/18c92635-2d69-45d4-b25a-8a67a228e11c-kube-api-access-2z5rj\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.042077 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rc8vk"] Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.050134 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rc8vk"] Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.155760 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg"] Dec 11 18:30:00 crc kubenswrapper[4877]: E1211 18:30:00.156329 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c92635-2d69-45d4-b25a-8a67a228e11c" 
containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.156353 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c92635-2d69-45d4-b25a-8a67a228e11c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.156645 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c92635-2d69-45d4-b25a-8a67a228e11c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.157511 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.160721 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.172611 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.174715 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg"] Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.212035 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.212196 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47bp\" (UniqueName: 
\"kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.212277 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.315344 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.315762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g47bp\" (UniqueName: \"kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.315819 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc 
kubenswrapper[4877]: I1211 18:30:00.316249 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.323181 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.334938 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g47bp\" (UniqueName: \"kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp\") pod \"collect-profiles-29424630-2qjgg\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.350159 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" event={"ID":"18c92635-2d69-45d4-b25a-8a67a228e11c","Type":"ContainerDied","Data":"6fe4de735396f731eaa984686aa8928aee4f798874b3501a6ad5b2cc3ddec9b6"} Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.350220 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fe4de735396f731eaa984686aa8928aee4f798874b3501a6ad5b2cc3ddec9b6" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.350527 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.461437 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr"] Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.463403 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.467615 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.468009 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.468214 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.468426 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.475600 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr"] Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.487296 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.520542 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.520660 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.520753 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfsl\" (UniqueName: \"kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.623325 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.623503 4877 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.623650 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvfsl\" (UniqueName: \"kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.631552 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.631594 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.648091 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfsl\" (UniqueName: \"kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c68cr\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.793265 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:00 crc kubenswrapper[4877]: I1211 18:30:00.951186 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg"] Dec 11 18:30:01 crc kubenswrapper[4877]: I1211 18:30:01.232948 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6503cc3e-c36a-4a9d-aced-5c5e4a2fde80" path="/var/lib/kubelet/pods/6503cc3e-c36a-4a9d-aced-5c5e4a2fde80/volumes" Dec 11 18:30:01 crc kubenswrapper[4877]: I1211 18:30:01.361142 4877 generic.go:334] "Generic (PLEG): container finished" podID="2802d435-b434-4dc8-9862-7feaef586d64" containerID="d82bcb1be32826fbb1a4e0abefb835e2a4d40f2d18d8e22106f7c00825a03dbe" exitCode=0 Dec 11 18:30:01 crc kubenswrapper[4877]: I1211 18:30:01.361212 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" event={"ID":"2802d435-b434-4dc8-9862-7feaef586d64","Type":"ContainerDied","Data":"d82bcb1be32826fbb1a4e0abefb835e2a4d40f2d18d8e22106f7c00825a03dbe"} Dec 11 18:30:01 crc kubenswrapper[4877]: I1211 18:30:01.361283 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" event={"ID":"2802d435-b434-4dc8-9862-7feaef586d64","Type":"ContainerStarted","Data":"0d4f67262ec7d6bd29d8b59d0418e6552827384a3b735eaf66e237d80993724a"} Dec 11 18:30:01 crc kubenswrapper[4877]: I1211 18:30:01.392852 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr"] Dec 11 18:30:01 crc kubenswrapper[4877]: W1211 18:30:01.405903 4877 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd857578_73bd_4b2b_b7ba_0b6a7058b48e.slice/crio-325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b WatchSource:0}: Error finding container 325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b: Status 404 returned error can't find the container with id 325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.221104 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:30:02 crc kubenswrapper[4877]: E1211 18:30:02.221812 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.374712 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" event={"ID":"cd857578-73bd-4b2b-b7ba-0b6a7058b48e","Type":"ContainerStarted","Data":"325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b"} Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.775004 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.888282 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume\") pod \"2802d435-b434-4dc8-9862-7feaef586d64\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.888390 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g47bp\" (UniqueName: \"kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp\") pod \"2802d435-b434-4dc8-9862-7feaef586d64\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.888426 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume\") pod \"2802d435-b434-4dc8-9862-7feaef586d64\" (UID: \"2802d435-b434-4dc8-9862-7feaef586d64\") " Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.890278 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume" (OuterVolumeSpecName: "config-volume") pod "2802d435-b434-4dc8-9862-7feaef586d64" (UID: "2802d435-b434-4dc8-9862-7feaef586d64"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.890601 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2802d435-b434-4dc8-9862-7feaef586d64-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.896272 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2802d435-b434-4dc8-9862-7feaef586d64" (UID: "2802d435-b434-4dc8-9862-7feaef586d64"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.909193 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp" (OuterVolumeSpecName: "kube-api-access-g47bp") pod "2802d435-b434-4dc8-9862-7feaef586d64" (UID: "2802d435-b434-4dc8-9862-7feaef586d64"). InnerVolumeSpecName "kube-api-access-g47bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.992203 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2802d435-b434-4dc8-9862-7feaef586d64-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:02 crc kubenswrapper[4877]: I1211 18:30:02.992242 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g47bp\" (UniqueName: \"kubernetes.io/projected/2802d435-b434-4dc8-9862-7feaef586d64-kube-api-access-g47bp\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:03 crc kubenswrapper[4877]: I1211 18:30:03.389321 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" Dec 11 18:30:03 crc kubenswrapper[4877]: I1211 18:30:03.389329 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg" event={"ID":"2802d435-b434-4dc8-9862-7feaef586d64","Type":"ContainerDied","Data":"0d4f67262ec7d6bd29d8b59d0418e6552827384a3b735eaf66e237d80993724a"} Dec 11 18:30:03 crc kubenswrapper[4877]: I1211 18:30:03.389439 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d4f67262ec7d6bd29d8b59d0418e6552827384a3b735eaf66e237d80993724a" Dec 11 18:30:03 crc kubenswrapper[4877]: I1211 18:30:03.393533 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" event={"ID":"cd857578-73bd-4b2b-b7ba-0b6a7058b48e","Type":"ContainerStarted","Data":"cd07f2c5948efbaa3c6feb4fa98a9233be26244dbce0855f2f383c8f4fd7288c"} Dec 11 18:30:03 crc kubenswrapper[4877]: I1211 18:30:03.443114 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" podStartSLOduration=2.729895092 podStartE2EDuration="3.443078216s" podCreationTimestamp="2025-12-11 18:30:00 +0000 UTC" firstStartedPulling="2025-12-11 18:30:01.411141984 +0000 UTC m=+1762.437386028" lastFinishedPulling="2025-12-11 18:30:02.124325078 +0000 UTC m=+1763.150569152" observedRunningTime="2025-12-11 18:30:03.423101235 +0000 UTC m=+1764.449345299" watchObservedRunningTime="2025-12-11 18:30:03.443078216 +0000 UTC m=+1764.469322290" Dec 11 18:30:07 crc kubenswrapper[4877]: I1211 18:30:07.216814 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:30:07 crc kubenswrapper[4877]: E1211 18:30:07.217967 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:30:16 crc kubenswrapper[4877]: I1211 18:30:16.215755 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:30:16 crc kubenswrapper[4877]: E1211 18:30:16.216782 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:30:19 crc kubenswrapper[4877]: I1211 18:30:19.227958 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:30:19 crc kubenswrapper[4877]: E1211 18:30:19.229196 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:30:22 crc kubenswrapper[4877]: I1211 18:30:22.058729 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-g9c4v"] Dec 11 18:30:22 crc kubenswrapper[4877]: I1211 18:30:22.072120 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-g9c4v"] Dec 11 18:30:23 crc kubenswrapper[4877]: I1211 18:30:23.051460 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-57l9c"] Dec 11 18:30:23 crc kubenswrapper[4877]: I1211 18:30:23.062346 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-57l9c"] Dec 11 18:30:23 crc kubenswrapper[4877]: I1211 18:30:23.232246 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01deb5d-7223-4abd-8bd9-502ddb0d74df" path="/var/lib/kubelet/pods/b01deb5d-7223-4abd-8bd9-502ddb0d74df/volumes" Dec 11 18:30:23 crc kubenswrapper[4877]: I1211 18:30:23.233837 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb9ba5f-800f-4449-8005-964f8e74f7e5" path="/var/lib/kubelet/pods/ffb9ba5f-800f-4449-8005-964f8e74f7e5/volumes" Dec 11 18:30:30 crc kubenswrapper[4877]: I1211 18:30:30.217470 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:30:30 crc kubenswrapper[4877]: E1211 18:30:30.218607 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:30:34 crc kubenswrapper[4877]: I1211 18:30:34.216656 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:30:34 crc kubenswrapper[4877]: E1211 18:30:34.217409 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:30:43 crc kubenswrapper[4877]: I1211 18:30:43.855136 4877 generic.go:334] "Generic (PLEG): container finished" podID="cd857578-73bd-4b2b-b7ba-0b6a7058b48e" containerID="cd07f2c5948efbaa3c6feb4fa98a9233be26244dbce0855f2f383c8f4fd7288c" exitCode=0 Dec 11 18:30:43 crc kubenswrapper[4877]: I1211 18:30:43.855366 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" event={"ID":"cd857578-73bd-4b2b-b7ba-0b6a7058b48e","Type":"ContainerDied","Data":"cd07f2c5948efbaa3c6feb4fa98a9233be26244dbce0855f2f383c8f4fd7288c"} Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.223620 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.744003 4877 scope.go:117] "RemoveContainer" containerID="6d55f177eb699e867a14a3a616aa73d27e9f1ea496f2d91a6de60a3c553307dd" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.849494 4877 scope.go:117] "RemoveContainer" containerID="4c964521ffa0c27b319fdd69ac8c5eeff09323d6c16c8e8544b468f8b940354a" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.865674 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.887679 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf"} Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.888087 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.893547 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.893549 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c68cr" event={"ID":"cd857578-73bd-4b2b-b7ba-0b6a7058b48e","Type":"ContainerDied","Data":"325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b"} Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.894319 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325ae1da5250ede6c8846835ca524910dd0def1c2f511f70be6b8cd53bc1f53b" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.959049 4877 scope.go:117] "RemoveContainer" containerID="28b365018e4f5a1b9661377ca5a8b8408a8a43019f057240e199ef74868df64d" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.996317 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck"] Dec 11 18:30:45 crc kubenswrapper[4877]: E1211 18:30:45.997039 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2802d435-b434-4dc8-9862-7feaef586d64" containerName="collect-profiles" Dec 11 
18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.997058 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2802d435-b434-4dc8-9862-7feaef586d64" containerName="collect-profiles" Dec 11 18:30:45 crc kubenswrapper[4877]: E1211 18:30:45.997098 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd857578-73bd-4b2b-b7ba-0b6a7058b48e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.997106 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd857578-73bd-4b2b-b7ba-0b6a7058b48e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.997349 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2802d435-b434-4dc8-9862-7feaef586d64" containerName="collect-profiles" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.997402 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd857578-73bd-4b2b-b7ba-0b6a7058b48e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:30:45 crc kubenswrapper[4877]: I1211 18:30:45.998326 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.012901 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck"] Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.013640 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvfsl\" (UniqueName: \"kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl\") pod \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.015008 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory\") pod \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.015113 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key\") pod \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\" (UID: \"cd857578-73bd-4b2b-b7ba-0b6a7058b48e\") " Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.022765 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl" (OuterVolumeSpecName: "kube-api-access-tvfsl") pod "cd857578-73bd-4b2b-b7ba-0b6a7058b48e" (UID: "cd857578-73bd-4b2b-b7ba-0b6a7058b48e"). InnerVolumeSpecName "kube-api-access-tvfsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.043047 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cd857578-73bd-4b2b-b7ba-0b6a7058b48e" (UID: "cd857578-73bd-4b2b-b7ba-0b6a7058b48e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.044527 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory" (OuterVolumeSpecName: "inventory") pod "cd857578-73bd-4b2b-b7ba-0b6a7058b48e" (UID: "cd857578-73bd-4b2b-b7ba-0b6a7058b48e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119171 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkb4\" (UniqueName: \"kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119230 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119503 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119939 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119956 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.119967 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvfsl\" (UniqueName: \"kubernetes.io/projected/cd857578-73bd-4b2b-b7ba-0b6a7058b48e-kube-api-access-tvfsl\") on node \"crc\" DevicePath \"\"" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.221561 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.221686 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.221850 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbkb4\" (UniqueName: \"kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.226882 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.229341 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.249956 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbkb4\" (UniqueName: \"kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-js6ck\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.325285 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.774742 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck"] Dec 11 18:30:46 crc kubenswrapper[4877]: I1211 18:30:46.911038 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" event={"ID":"728cbe41-aead-4492-bed9-312b93b70b88","Type":"ContainerStarted","Data":"965b2c5a242fdeaccd33dcbf4aa417d1a47365640b493c730e77b3d429f27886"} Dec 11 18:30:47 crc kubenswrapper[4877]: I1211 18:30:47.923985 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" event={"ID":"728cbe41-aead-4492-bed9-312b93b70b88","Type":"ContainerStarted","Data":"ce648e34b039fdf09239b2bcd7b79feeee661a0d2eb2bc086d7a6fd01bc68fd0"} Dec 11 18:30:47 crc kubenswrapper[4877]: I1211 18:30:47.959346 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" podStartSLOduration=2.47649687 podStartE2EDuration="2.959320075s" podCreationTimestamp="2025-12-11 18:30:45 +0000 UTC" firstStartedPulling="2025-12-11 18:30:46.776989564 +0000 UTC m=+1807.803233608" lastFinishedPulling="2025-12-11 18:30:47.259812759 +0000 UTC m=+1808.286056813" observedRunningTime="2025-12-11 18:30:47.949987592 +0000 UTC m=+1808.976231686" watchObservedRunningTime="2025-12-11 18:30:47.959320075 +0000 UTC m=+1808.985564159" Dec 11 18:30:49 crc kubenswrapper[4877]: I1211 18:30:49.233136 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:30:49 crc kubenswrapper[4877]: E1211 18:30:49.234443 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:30:51 crc kubenswrapper[4877]: I1211 18:30:51.147625 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:31:04 crc kubenswrapper[4877]: I1211 18:31:04.215827 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:31:04 crc kubenswrapper[4877]: E1211 18:31:04.216964 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:31:15 crc kubenswrapper[4877]: I1211 18:31:15.216495 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:31:15 crc kubenswrapper[4877]: E1211 18:31:15.218026 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:31:29 crc kubenswrapper[4877]: I1211 18:31:29.056531 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-dpqp2"] Dec 11 18:31:29 crc kubenswrapper[4877]: I1211 18:31:29.070849 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dpqp2"] Dec 11 18:31:29 crc kubenswrapper[4877]: I1211 18:31:29.223295 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:31:29 crc kubenswrapper[4877]: E1211 18:31:29.223843 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:31:29 crc kubenswrapper[4877]: I1211 18:31:29.229411 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffdec34-8bf1-4c69-80b3-dd26a191e1d8" path="/var/lib/kubelet/pods/dffdec34-8bf1-4c69-80b3-dd26a191e1d8/volumes" Dec 11 18:31:41 crc kubenswrapper[4877]: I1211 18:31:41.216938 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:31:41 crc kubenswrapper[4877]: E1211 18:31:41.217857 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:31:45 crc kubenswrapper[4877]: I1211 18:31:45.598677 4877 generic.go:334] "Generic (PLEG): container finished" podID="728cbe41-aead-4492-bed9-312b93b70b88" 
containerID="ce648e34b039fdf09239b2bcd7b79feeee661a0d2eb2bc086d7a6fd01bc68fd0" exitCode=0 Dec 11 18:31:45 crc kubenswrapper[4877]: I1211 18:31:45.598822 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" event={"ID":"728cbe41-aead-4492-bed9-312b93b70b88","Type":"ContainerDied","Data":"ce648e34b039fdf09239b2bcd7b79feeee661a0d2eb2bc086d7a6fd01bc68fd0"} Dec 11 18:31:46 crc kubenswrapper[4877]: I1211 18:31:46.090042 4877 scope.go:117] "RemoveContainer" containerID="543ec38e8a3e8e19587f1e408d4710c214d2ecdca6ca9cf643b0d8303ab688f9" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.137678 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.247017 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbkb4\" (UniqueName: \"kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4\") pod \"728cbe41-aead-4492-bed9-312b93b70b88\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.247262 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory\") pod \"728cbe41-aead-4492-bed9-312b93b70b88\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.247307 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key\") pod \"728cbe41-aead-4492-bed9-312b93b70b88\" (UID: \"728cbe41-aead-4492-bed9-312b93b70b88\") " Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.255341 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4" (OuterVolumeSpecName: "kube-api-access-hbkb4") pod "728cbe41-aead-4492-bed9-312b93b70b88" (UID: "728cbe41-aead-4492-bed9-312b93b70b88"). InnerVolumeSpecName "kube-api-access-hbkb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.281194 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory" (OuterVolumeSpecName: "inventory") pod "728cbe41-aead-4492-bed9-312b93b70b88" (UID: "728cbe41-aead-4492-bed9-312b93b70b88"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.294208 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "728cbe41-aead-4492-bed9-312b93b70b88" (UID: "728cbe41-aead-4492-bed9-312b93b70b88"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.350653 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.350709 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/728cbe41-aead-4492-bed9-312b93b70b88-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.350733 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbkb4\" (UniqueName: \"kubernetes.io/projected/728cbe41-aead-4492-bed9-312b93b70b88-kube-api-access-hbkb4\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.628180 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" event={"ID":"728cbe41-aead-4492-bed9-312b93b70b88","Type":"ContainerDied","Data":"965b2c5a242fdeaccd33dcbf4aa417d1a47365640b493c730e77b3d429f27886"} Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.628226 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="965b2c5a242fdeaccd33dcbf4aa417d1a47365640b493c730e77b3d429f27886" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.628286 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-js6ck" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.750852 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-b9jsb"] Dec 11 18:31:47 crc kubenswrapper[4877]: E1211 18:31:47.751639 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728cbe41-aead-4492-bed9-312b93b70b88" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.751680 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="728cbe41-aead-4492-bed9-312b93b70b88" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.752092 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="728cbe41-aead-4492-bed9-312b93b70b88" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.753086 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.757442 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.757507 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.757739 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.760738 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.772870 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-b9jsb"] Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.860945 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w8t\" (UniqueName: \"kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.861051 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.861314 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.965058 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2w8t\" (UniqueName: \"kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.965818 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.965970 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.972060 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: 
I1211 18:31:47.972884 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:47 crc kubenswrapper[4877]: I1211 18:31:47.997319 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2w8t\" (UniqueName: \"kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t\") pod \"ssh-known-hosts-edpm-deployment-b9jsb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:48 crc kubenswrapper[4877]: I1211 18:31:48.100464 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:48 crc kubenswrapper[4877]: I1211 18:31:48.741047 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-b9jsb"] Dec 11 18:31:48 crc kubenswrapper[4877]: I1211 18:31:48.749410 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:31:49 crc kubenswrapper[4877]: I1211 18:31:49.649822 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" event={"ID":"a2ef07f1-1d19-403b-a68c-0092e8030adb","Type":"ContainerStarted","Data":"d3ef91eeb354bd4502d872df524f3d890c2dbe2ccca88fc3b6faa950fdeb2424"} Dec 11 18:31:49 crc kubenswrapper[4877]: I1211 18:31:49.650235 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" event={"ID":"a2ef07f1-1d19-403b-a68c-0092e8030adb","Type":"ContainerStarted","Data":"9157eab180a544a2ae6dab20d5a59eadea160507618e76cfe1aa77b90c217f1c"} Dec 11 18:31:49 crc kubenswrapper[4877]: I1211 
18:31:49.680046 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" podStartSLOduration=2.208730233 podStartE2EDuration="2.680020396s" podCreationTimestamp="2025-12-11 18:31:47 +0000 UTC" firstStartedPulling="2025-12-11 18:31:48.749113714 +0000 UTC m=+1869.775357778" lastFinishedPulling="2025-12-11 18:31:49.220403897 +0000 UTC m=+1870.246647941" observedRunningTime="2025-12-11 18:31:49.670658062 +0000 UTC m=+1870.696902106" watchObservedRunningTime="2025-12-11 18:31:49.680020396 +0000 UTC m=+1870.706264440" Dec 11 18:31:54 crc kubenswrapper[4877]: I1211 18:31:54.215231 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:31:54 crc kubenswrapper[4877]: E1211 18:31:54.216137 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:31:57 crc kubenswrapper[4877]: I1211 18:31:57.740341 4877 generic.go:334] "Generic (PLEG): container finished" podID="a2ef07f1-1d19-403b-a68c-0092e8030adb" containerID="d3ef91eeb354bd4502d872df524f3d890c2dbe2ccca88fc3b6faa950fdeb2424" exitCode=0 Dec 11 18:31:57 crc kubenswrapper[4877]: I1211 18:31:57.740439 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" event={"ID":"a2ef07f1-1d19-403b-a68c-0092e8030adb","Type":"ContainerDied","Data":"d3ef91eeb354bd4502d872df524f3d890c2dbe2ccca88fc3b6faa950fdeb2424"} Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.258419 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.343970 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam\") pod \"a2ef07f1-1d19-403b-a68c-0092e8030adb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.344118 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0\") pod \"a2ef07f1-1d19-403b-a68c-0092e8030adb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.344167 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2w8t\" (UniqueName: \"kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t\") pod \"a2ef07f1-1d19-403b-a68c-0092e8030adb\" (UID: \"a2ef07f1-1d19-403b-a68c-0092e8030adb\") " Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.354215 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t" (OuterVolumeSpecName: "kube-api-access-f2w8t") pod "a2ef07f1-1d19-403b-a68c-0092e8030adb" (UID: "a2ef07f1-1d19-403b-a68c-0092e8030adb"). InnerVolumeSpecName "kube-api-access-f2w8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.396757 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a2ef07f1-1d19-403b-a68c-0092e8030adb" (UID: "a2ef07f1-1d19-403b-a68c-0092e8030adb"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.396783 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2ef07f1-1d19-403b-a68c-0092e8030adb" (UID: "a2ef07f1-1d19-403b-a68c-0092e8030adb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.447597 4877 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.447664 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2w8t\" (UniqueName: \"kubernetes.io/projected/a2ef07f1-1d19-403b-a68c-0092e8030adb-kube-api-access-f2w8t\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.447693 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2ef07f1-1d19-403b-a68c-0092e8030adb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.768919 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" event={"ID":"a2ef07f1-1d19-403b-a68c-0092e8030adb","Type":"ContainerDied","Data":"9157eab180a544a2ae6dab20d5a59eadea160507618e76cfe1aa77b90c217f1c"} Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.769273 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9157eab180a544a2ae6dab20d5a59eadea160507618e76cfe1aa77b90c217f1c" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.769013 
4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-b9jsb" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.876335 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7"] Dec 11 18:31:59 crc kubenswrapper[4877]: E1211 18:31:59.880315 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ef07f1-1d19-403b-a68c-0092e8030adb" containerName="ssh-known-hosts-edpm-deployment" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.880340 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ef07f1-1d19-403b-a68c-0092e8030adb" containerName="ssh-known-hosts-edpm-deployment" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.880888 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ef07f1-1d19-403b-a68c-0092e8030adb" containerName="ssh-known-hosts-edpm-deployment" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.881843 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.885768 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.888287 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.888693 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.888718 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.910793 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7"] Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.961556 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftrx\" (UniqueName: \"kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.961623 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:31:59 crc kubenswrapper[4877]: I1211 18:31:59.961680 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.063693 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.063762 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftrx\" (UniqueName: \"kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.063800 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.068574 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.068907 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.087019 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftrx\" (UniqueName: \"kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b4lh7\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.216134 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.583997 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7"] Dec 11 18:32:00 crc kubenswrapper[4877]: W1211 18:32:00.590784 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6dc6f9d_c3d7_45da_98ca_e00538c9680e.slice/crio-b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523 WatchSource:0}: Error finding container b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523: Status 404 returned error can't find the container with id b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523 Dec 11 18:32:00 crc kubenswrapper[4877]: I1211 18:32:00.793535 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" event={"ID":"f6dc6f9d-c3d7-45da-98ca-e00538c9680e","Type":"ContainerStarted","Data":"b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523"} Dec 11 18:32:01 crc kubenswrapper[4877]: I1211 18:32:01.807106 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" event={"ID":"f6dc6f9d-c3d7-45da-98ca-e00538c9680e","Type":"ContainerStarted","Data":"9586f80799aca47a86d102e9a8b0ae9d4c2450b63a367c57a109c13fa1ac08b9"} Dec 11 18:32:01 crc kubenswrapper[4877]: I1211 18:32:01.840084 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" podStartSLOduration=2.277110742 podStartE2EDuration="2.840052476s" podCreationTimestamp="2025-12-11 18:31:59 +0000 UTC" firstStartedPulling="2025-12-11 18:32:00.594018403 +0000 UTC m=+1881.620262467" lastFinishedPulling="2025-12-11 18:32:01.156960127 +0000 UTC m=+1882.183204201" observedRunningTime="2025-12-11 
18:32:01.834949703 +0000 UTC m=+1882.861193747" watchObservedRunningTime="2025-12-11 18:32:01.840052476 +0000 UTC m=+1882.866296560" Dec 11 18:32:05 crc kubenswrapper[4877]: I1211 18:32:05.216284 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:32:05 crc kubenswrapper[4877]: E1211 18:32:05.217696 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:32:10 crc kubenswrapper[4877]: I1211 18:32:10.912123 4877 generic.go:334] "Generic (PLEG): container finished" podID="f6dc6f9d-c3d7-45da-98ca-e00538c9680e" containerID="9586f80799aca47a86d102e9a8b0ae9d4c2450b63a367c57a109c13fa1ac08b9" exitCode=0 Dec 11 18:32:10 crc kubenswrapper[4877]: I1211 18:32:10.912191 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" event={"ID":"f6dc6f9d-c3d7-45da-98ca-e00538c9680e","Type":"ContainerDied","Data":"9586f80799aca47a86d102e9a8b0ae9d4c2450b63a367c57a109c13fa1ac08b9"} Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.582675 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.685648 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory\") pod \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.685807 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key\") pod \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.685926 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftrx\" (UniqueName: \"kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx\") pod \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\" (UID: \"f6dc6f9d-c3d7-45da-98ca-e00538c9680e\") " Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.698703 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx" (OuterVolumeSpecName: "kube-api-access-sftrx") pod "f6dc6f9d-c3d7-45da-98ca-e00538c9680e" (UID: "f6dc6f9d-c3d7-45da-98ca-e00538c9680e"). InnerVolumeSpecName "kube-api-access-sftrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.719576 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f6dc6f9d-c3d7-45da-98ca-e00538c9680e" (UID: "f6dc6f9d-c3d7-45da-98ca-e00538c9680e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.739925 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory" (OuterVolumeSpecName: "inventory") pod "f6dc6f9d-c3d7-45da-98ca-e00538c9680e" (UID: "f6dc6f9d-c3d7-45da-98ca-e00538c9680e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.788480 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.788539 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftrx\" (UniqueName: \"kubernetes.io/projected/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-kube-api-access-sftrx\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.788562 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6dc6f9d-c3d7-45da-98ca-e00538c9680e-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.939853 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" event={"ID":"f6dc6f9d-c3d7-45da-98ca-e00538c9680e","Type":"ContainerDied","Data":"b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523"} Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.939961 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b646362ece38ec6316859d49661a808e5299ff75e58af54570ea58ff36e32523" Dec 11 18:32:12 crc kubenswrapper[4877]: I1211 18:32:12.940062 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b4lh7" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.056278 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs"] Dec 11 18:32:13 crc kubenswrapper[4877]: E1211 18:32:13.056758 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6dc6f9d-c3d7-45da-98ca-e00538c9680e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.056774 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6dc6f9d-c3d7-45da-98ca-e00538c9680e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.056946 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6dc6f9d-c3d7-45da-98ca-e00538c9680e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.057640 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.061744 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.061764 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.061881 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.062460 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.093175 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs"] Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.097297 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstnl\" (UniqueName: \"kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.097439 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.097655 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.201359 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wstnl\" (UniqueName: \"kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.201563 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.201610 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.207847 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: 
\"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.208004 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.220775 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstnl\" (UniqueName: \"kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.386023 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.852737 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs"] Dec 11 18:32:13 crc kubenswrapper[4877]: W1211 18:32:13.853590 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fd09ef_4694_42ad_b7fa_17a721fec8f3.slice/crio-d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33 WatchSource:0}: Error finding container d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33: Status 404 returned error can't find the container with id d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33 Dec 11 18:32:13 crc kubenswrapper[4877]: I1211 18:32:13.953098 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" event={"ID":"c4fd09ef-4694-42ad-b7fa-17a721fec8f3","Type":"ContainerStarted","Data":"d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33"} Dec 11 18:32:14 crc kubenswrapper[4877]: I1211 18:32:14.967441 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" event={"ID":"c4fd09ef-4694-42ad-b7fa-17a721fec8f3","Type":"ContainerStarted","Data":"7c0d205d80de4d3d078e90c129e9a8d130dbef083067ebb11bedc9b3850bd2ea"} Dec 11 18:32:15 crc kubenswrapper[4877]: I1211 18:32:15.003599 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" podStartSLOduration=1.530940763 podStartE2EDuration="2.003563381s" podCreationTimestamp="2025-12-11 18:32:13 +0000 UTC" firstStartedPulling="2025-12-11 18:32:13.856526042 +0000 UTC m=+1894.882770086" lastFinishedPulling="2025-12-11 18:32:14.32914866 +0000 UTC m=+1895.355392704" 
observedRunningTime="2025-12-11 18:32:14.988745955 +0000 UTC m=+1896.014990009" watchObservedRunningTime="2025-12-11 18:32:15.003563381 +0000 UTC m=+1896.029807465" Dec 11 18:32:18 crc kubenswrapper[4877]: I1211 18:32:18.216882 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:32:19 crc kubenswrapper[4877]: I1211 18:32:19.045779 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34"} Dec 11 18:32:26 crc kubenswrapper[4877]: I1211 18:32:26.155506 4877 generic.go:334] "Generic (PLEG): container finished" podID="c4fd09ef-4694-42ad-b7fa-17a721fec8f3" containerID="7c0d205d80de4d3d078e90c129e9a8d130dbef083067ebb11bedc9b3850bd2ea" exitCode=0 Dec 11 18:32:26 crc kubenswrapper[4877]: I1211 18:32:26.155597 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" event={"ID":"c4fd09ef-4694-42ad-b7fa-17a721fec8f3","Type":"ContainerDied","Data":"7c0d205d80de4d3d078e90c129e9a8d130dbef083067ebb11bedc9b3850bd2ea"} Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.748952 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.889307 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstnl\" (UniqueName: \"kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl\") pod \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.889715 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory\") pod \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.889804 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key\") pod \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\" (UID: \"c4fd09ef-4694-42ad-b7fa-17a721fec8f3\") " Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.899651 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl" (OuterVolumeSpecName: "kube-api-access-wstnl") pod "c4fd09ef-4694-42ad-b7fa-17a721fec8f3" (UID: "c4fd09ef-4694-42ad-b7fa-17a721fec8f3"). InnerVolumeSpecName "kube-api-access-wstnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.929241 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory" (OuterVolumeSpecName: "inventory") pod "c4fd09ef-4694-42ad-b7fa-17a721fec8f3" (UID: "c4fd09ef-4694-42ad-b7fa-17a721fec8f3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.950083 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4fd09ef-4694-42ad-b7fa-17a721fec8f3" (UID: "c4fd09ef-4694-42ad-b7fa-17a721fec8f3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.993281 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.993321 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wstnl\" (UniqueName: \"kubernetes.io/projected/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-kube-api-access-wstnl\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:27 crc kubenswrapper[4877]: I1211 18:32:27.993338 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4fd09ef-4694-42ad-b7fa-17a721fec8f3-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.202719 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" event={"ID":"c4fd09ef-4694-42ad-b7fa-17a721fec8f3","Type":"ContainerDied","Data":"d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33"} Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.202809 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d622c885f04c23cbb080b34cbbdb40a43703d8638afe78e994f91b3b608e33" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.203161 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.294388 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv"] Dec 11 18:32:28 crc kubenswrapper[4877]: E1211 18:32:28.295244 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fd09ef-4694-42ad-b7fa-17a721fec8f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.295341 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fd09ef-4694-42ad-b7fa-17a721fec8f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.295759 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fd09ef-4694-42ad-b7fa-17a721fec8f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.296744 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.299951 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.301217 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.306415 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.307046 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.307104 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.307151 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.308338 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.309721 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.317485 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv"] Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.408421 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409456 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409500 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409562 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbsk\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409590 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409685 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409814 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409903 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.409952 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.410117 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.410159 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.410186 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.410239 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.410310 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512188 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512249 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512284 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512324 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512351 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512404 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbsk\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512425 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: 
\"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512481 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512518 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512550 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512574 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 
18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512608 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512630 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.512649 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.520058 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc 
kubenswrapper[4877]: I1211 18:32:28.521045 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.521058 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.521808 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.522301 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.523043 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.523915 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.524335 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.524533 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.525108 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.526281 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.527937 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.529088 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.556814 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbsk\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f27jv\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 
11 18:32:28 crc kubenswrapper[4877]: I1211 18:32:28.640913 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:32:29 crc kubenswrapper[4877]: I1211 18:32:29.263249 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv"] Dec 11 18:32:29 crc kubenswrapper[4877]: W1211 18:32:29.269386 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dd6596b_9571_4ce0_8658_78d5f99fbb5a.slice/crio-318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718 WatchSource:0}: Error finding container 318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718: Status 404 returned error can't find the container with id 318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718 Dec 11 18:32:30 crc kubenswrapper[4877]: I1211 18:32:30.227153 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" event={"ID":"9dd6596b-9571-4ce0-8658-78d5f99fbb5a","Type":"ContainerStarted","Data":"835b47a023027cf4c23eba965cb087b6bcecd40e54f3e71c48ea8b7329baee42"} Dec 11 18:32:30 crc kubenswrapper[4877]: I1211 18:32:30.228151 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" event={"ID":"9dd6596b-9571-4ce0-8658-78d5f99fbb5a","Type":"ContainerStarted","Data":"318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718"} Dec 11 18:32:30 crc kubenswrapper[4877]: I1211 18:32:30.271703 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" podStartSLOduration=1.805573766 podStartE2EDuration="2.271675455s" podCreationTimestamp="2025-12-11 18:32:28 +0000 UTC" firstStartedPulling="2025-12-11 
18:32:29.273190549 +0000 UTC m=+1910.299434593" lastFinishedPulling="2025-12-11 18:32:29.739292228 +0000 UTC m=+1910.765536282" observedRunningTime="2025-12-11 18:32:30.261203312 +0000 UTC m=+1911.287447376" watchObservedRunningTime="2025-12-11 18:32:30.271675455 +0000 UTC m=+1911.297919499" Dec 11 18:33:15 crc kubenswrapper[4877]: I1211 18:33:15.859719 4877 generic.go:334] "Generic (PLEG): container finished" podID="9dd6596b-9571-4ce0-8658-78d5f99fbb5a" containerID="835b47a023027cf4c23eba965cb087b6bcecd40e54f3e71c48ea8b7329baee42" exitCode=0 Dec 11 18:33:15 crc kubenswrapper[4877]: I1211 18:33:15.859816 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" event={"ID":"9dd6596b-9571-4ce0-8658-78d5f99fbb5a","Type":"ContainerDied","Data":"835b47a023027cf4c23eba965cb087b6bcecd40e54f3e71c48ea8b7329baee42"} Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.341283 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448704 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448777 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmbsk\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448817 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448847 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448882 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 
crc kubenswrapper[4877]: I1211 18:33:17.448902 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448928 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448969 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.448986 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.449012 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 
18:33:17.449056 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.449188 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.449353 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.449425 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle\") pod \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\" (UID: \"9dd6596b-9571-4ce0-8658-78d5f99fbb5a\") " Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.454949 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.456610 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.456711 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.456854 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.458336 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.458444 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk" (OuterVolumeSpecName: "kube-api-access-jmbsk") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "kube-api-access-jmbsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.459635 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.459762 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.460207 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.460317 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.460644 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.462837 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.486742 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory" (OuterVolumeSpecName: "inventory") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.510611 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9dd6596b-9571-4ce0-8658-78d5f99fbb5a" (UID: "9dd6596b-9571-4ce0-8658-78d5f99fbb5a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552252 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552293 4877 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552308 4877 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552317 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmbsk\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-kube-api-access-jmbsk\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552326 4877 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552338 4877 reconciler_common.go:293] "Volume detached for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552347 4877 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552357 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552383 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552397 4877 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552407 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552416 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552427 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.552438 4877 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd6596b-9571-4ce0-8658-78d5f99fbb5a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.884925 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" event={"ID":"9dd6596b-9571-4ce0-8658-78d5f99fbb5a","Type":"ContainerDied","Data":"318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718"} Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.884983 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="318898a097aeab7071ced8aec4f9c269436b2fa5dd827d988b8ac08da209c718" Dec 11 18:33:17 crc kubenswrapper[4877]: I1211 18:33:17.885020 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f27jv" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.024530 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb"] Dec 11 18:33:18 crc kubenswrapper[4877]: E1211 18:33:18.025068 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd6596b-9571-4ce0-8658-78d5f99fbb5a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.025092 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd6596b-9571-4ce0-8658-78d5f99fbb5a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.025340 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd6596b-9571-4ce0-8658-78d5f99fbb5a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.026336 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.030079 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.030208 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.030720 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.030929 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.031159 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.050474 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb"] Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.170035 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qr4n\" (UniqueName: \"kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.170640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.170696 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.170719 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.170787 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.274738 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.274850 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6qr4n\" (UniqueName: \"kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.275021 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.275105 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.275142 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.276760 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc 
kubenswrapper[4877]: I1211 18:33:18.284263 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.285073 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.295795 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.301823 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qr4n\" (UniqueName: \"kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x5zzb\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:18 crc kubenswrapper[4877]: I1211 18:33:18.383349 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.018064 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb"] Dec 11 18:33:19 crc kubenswrapper[4877]: W1211 18:33:19.021524 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3346dff0_5931_4f19_817b_bc38011e3718.slice/crio-f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0 WatchSource:0}: Error finding container f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0: Status 404 returned error can't find the container with id f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0 Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.911539 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" event={"ID":"3346dff0-5931-4f19-817b-bc38011e3718","Type":"ContainerStarted","Data":"f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0"} Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.915255 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" exitCode=1 Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.915348 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf"} Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.915576 4877 scope.go:117] "RemoveContainer" containerID="39ed2e458ba003d9e4eb28d8ea9e69bfdefe36cd0300a21c857ae42a563608c7" Dec 11 18:33:19 crc kubenswrapper[4877]: I1211 18:33:19.917515 4877 
scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:33:19 crc kubenswrapper[4877]: E1211 18:33:19.917871 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:33:21 crc kubenswrapper[4877]: I1211 18:33:21.137483 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:33:21 crc kubenswrapper[4877]: I1211 18:33:21.138151 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:33:21 crc kubenswrapper[4877]: I1211 18:33:21.139325 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:33:21 crc kubenswrapper[4877]: E1211 18:33:21.139838 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:33:21 crc kubenswrapper[4877]: I1211 18:33:21.949492 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" 
event={"ID":"3346dff0-5931-4f19-817b-bc38011e3718","Type":"ContainerStarted","Data":"b71b8360f8261f5af86e764672fe704aed4f7134098df3402becf4874a14acc0"} Dec 11 18:33:21 crc kubenswrapper[4877]: I1211 18:33:21.986707 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" podStartSLOduration=2.352433623 podStartE2EDuration="3.986679756s" podCreationTimestamp="2025-12-11 18:33:18 +0000 UTC" firstStartedPulling="2025-12-11 18:33:19.025599468 +0000 UTC m=+1960.051843562" lastFinishedPulling="2025-12-11 18:33:20.659845631 +0000 UTC m=+1961.686089695" observedRunningTime="2025-12-11 18:33:21.977543383 +0000 UTC m=+1963.003787467" watchObservedRunningTime="2025-12-11 18:33:21.986679756 +0000 UTC m=+1963.012923810" Dec 11 18:33:32 crc kubenswrapper[4877]: I1211 18:33:32.216030 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:33:32 crc kubenswrapper[4877]: E1211 18:33:32.217005 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:33:43 crc kubenswrapper[4877]: I1211 18:33:43.227306 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:33:43 crc kubenswrapper[4877]: E1211 18:33:43.228113 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:33:58 crc kubenswrapper[4877]: I1211 18:33:58.216095 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:33:58 crc kubenswrapper[4877]: E1211 18:33:58.217404 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:34:13 crc kubenswrapper[4877]: I1211 18:34:13.216455 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:34:13 crc kubenswrapper[4877]: E1211 18:34:13.217693 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:34:26 crc kubenswrapper[4877]: I1211 18:34:26.216470 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:34:26 crc kubenswrapper[4877]: E1211 18:34:26.217423 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:34:36 crc kubenswrapper[4877]: I1211 18:34:36.952812 4877 generic.go:334] "Generic (PLEG): container finished" podID="3346dff0-5931-4f19-817b-bc38011e3718" containerID="b71b8360f8261f5af86e764672fe704aed4f7134098df3402becf4874a14acc0" exitCode=0 Dec 11 18:34:36 crc kubenswrapper[4877]: I1211 18:34:36.952927 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" event={"ID":"3346dff0-5931-4f19-817b-bc38011e3718","Type":"ContainerDied","Data":"b71b8360f8261f5af86e764672fe704aed4f7134098df3402becf4874a14acc0"} Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.543119 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.639510 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qr4n\" (UniqueName: \"kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n\") pod \"3346dff0-5931-4f19-817b-bc38011e3718\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.639575 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0\") pod \"3346dff0-5931-4f19-817b-bc38011e3718\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.639717 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory\") pod \"3346dff0-5931-4f19-817b-bc38011e3718\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " Dec 
11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.639793 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle\") pod \"3346dff0-5931-4f19-817b-bc38011e3718\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.639868 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key\") pod \"3346dff0-5931-4f19-817b-bc38011e3718\" (UID: \"3346dff0-5931-4f19-817b-bc38011e3718\") " Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.646105 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3346dff0-5931-4f19-817b-bc38011e3718" (UID: "3346dff0-5931-4f19-817b-bc38011e3718"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.655811 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n" (OuterVolumeSpecName: "kube-api-access-6qr4n") pod "3346dff0-5931-4f19-817b-bc38011e3718" (UID: "3346dff0-5931-4f19-817b-bc38011e3718"). InnerVolumeSpecName "kube-api-access-6qr4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.685777 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3346dff0-5931-4f19-817b-bc38011e3718" (UID: "3346dff0-5931-4f19-817b-bc38011e3718"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.701116 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3346dff0-5931-4f19-817b-bc38011e3718" (UID: "3346dff0-5931-4f19-817b-bc38011e3718"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.709205 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory" (OuterVolumeSpecName: "inventory") pod "3346dff0-5931-4f19-817b-bc38011e3718" (UID: "3346dff0-5931-4f19-817b-bc38011e3718"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.742655 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.742696 4877 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.742712 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3346dff0-5931-4f19-817b-bc38011e3718-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.742728 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qr4n\" (UniqueName: \"kubernetes.io/projected/3346dff0-5931-4f19-817b-bc38011e3718-kube-api-access-6qr4n\") on node \"crc\" 
DevicePath \"\"" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.742741 4877 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3346dff0-5931-4f19-817b-bc38011e3718-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.975079 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" event={"ID":"3346dff0-5931-4f19-817b-bc38011e3718","Type":"ContainerDied","Data":"f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0"} Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.975499 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d9db1034c9546990748b25cd088a29a3eee5b48e5bdf53677b032001680de0" Dec 11 18:34:38 crc kubenswrapper[4877]: I1211 18:34:38.975150 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x5zzb" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.127674 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d"] Dec 11 18:34:39 crc kubenswrapper[4877]: E1211 18:34:39.128136 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3346dff0-5931-4f19-817b-bc38011e3718" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.128152 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="3346dff0-5931-4f19-817b-bc38011e3718" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.128397 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="3346dff0-5931-4f19-817b-bc38011e3718" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.129071 4877 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.131823 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.132518 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.132643 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.133227 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.133284 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.133340 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.150263 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d"] Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.255469 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.255528 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsdq\" (UniqueName: \"kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.255588 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.255617 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.255640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.256119 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358254 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358495 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358578 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfsdq\" (UniqueName: \"kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358718 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358784 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.358843 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.361537 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.361817 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.361880 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.362595 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.366876 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.375622 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.376426 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.377229 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.377825 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.391922 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfsdq\" (UniqueName: \"kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.480437 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:34:39 crc kubenswrapper[4877]: I1211 18:34:39.486774 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:34:40 crc kubenswrapper[4877]: I1211 18:34:40.132568 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d"] Dec 11 18:34:40 crc kubenswrapper[4877]: I1211 18:34:40.654243 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:34:41 crc kubenswrapper[4877]: I1211 18:34:41.000321 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" event={"ID":"cb1dfedd-c524-4375-9b91-d9f87e34e2d0","Type":"ContainerStarted","Data":"942360cea0df2b5e6f31146481724a8479f12a85305ec775956468993a96e8ee"} Dec 11 18:34:41 crc kubenswrapper[4877]: I1211 18:34:41.000675 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" event={"ID":"cb1dfedd-c524-4375-9b91-d9f87e34e2d0","Type":"ContainerStarted","Data":"9b1d6b7104d2bc0b37eb2ae0a44a53b7cca5e5b23b0b9ed7f559d04234c60524"} Dec 11 18:34:41 crc kubenswrapper[4877]: I1211 18:34:41.020340 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" podStartSLOduration=1.503674818 podStartE2EDuration="2.020279519s" podCreationTimestamp="2025-12-11 18:34:39 +0000 UTC" firstStartedPulling="2025-12-11 18:34:40.134486769 +0000 UTC m=+2041.160730813" lastFinishedPulling="2025-12-11 18:34:40.65109147 +0000 UTC m=+2041.677335514" observedRunningTime="2025-12-11 18:34:41.01958487 +0000 UTC m=+2042.045828914" watchObservedRunningTime="2025-12-11 18:34:41.020279519 +0000 UTC m=+2042.046523603" Dec 11 18:34:41 crc kubenswrapper[4877]: I1211 18:34:41.216838 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 
18:34:41 crc kubenswrapper[4877]: E1211 18:34:41.217308 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:34:46 crc kubenswrapper[4877]: I1211 18:34:46.637482 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:34:46 crc kubenswrapper[4877]: I1211 18:34:46.638477 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:34:54 crc kubenswrapper[4877]: I1211 18:34:54.216016 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:34:54 crc kubenswrapper[4877]: E1211 18:34:54.217264 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:35:07 crc kubenswrapper[4877]: I1211 18:35:07.215937 4877 scope.go:117] "RemoveContainer" 
containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:35:07 crc kubenswrapper[4877]: E1211 18:35:07.217167 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:35:16 crc kubenswrapper[4877]: I1211 18:35:16.637445 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:35:16 crc kubenswrapper[4877]: I1211 18:35:16.638238 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:35:18 crc kubenswrapper[4877]: I1211 18:35:18.215505 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:35:18 crc kubenswrapper[4877]: E1211 18:35:18.216698 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:35:32 crc 
kubenswrapper[4877]: I1211 18:35:32.215353 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:35:32 crc kubenswrapper[4877]: E1211 18:35:32.216253 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:35:34 crc kubenswrapper[4877]: I1211 18:35:34.652907 4877 generic.go:334] "Generic (PLEG): container finished" podID="cb1dfedd-c524-4375-9b91-d9f87e34e2d0" containerID="942360cea0df2b5e6f31146481724a8479f12a85305ec775956468993a96e8ee" exitCode=0 Dec 11 18:35:34 crc kubenswrapper[4877]: I1211 18:35:34.652998 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" event={"ID":"cb1dfedd-c524-4375-9b91-d9f87e34e2d0","Type":"ContainerDied","Data":"942360cea0df2b5e6f31146481724a8479f12a85305ec775956468993a96e8ee"} Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.153658 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.328703 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.328833 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.329050 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.329257 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfsdq\" (UniqueName: \"kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.329491 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 
18:35:36.329541 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle\") pod \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\" (UID: \"cb1dfedd-c524-4375-9b91-d9f87e34e2d0\") " Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.336039 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq" (OuterVolumeSpecName: "kube-api-access-gfsdq") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "kube-api-access-gfsdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.337302 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.360155 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.364879 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.365713 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory" (OuterVolumeSpecName: "inventory") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.379297 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cb1dfedd-c524-4375-9b91-d9f87e34e2d0" (UID: "cb1dfedd-c524-4375-9b91-d9f87e34e2d0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432389 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432431 4877 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432442 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432452 4877 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432462 4877 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.432473 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfsdq\" (UniqueName: \"kubernetes.io/projected/cb1dfedd-c524-4375-9b91-d9f87e34e2d0-kube-api-access-gfsdq\") on node \"crc\" DevicePath \"\"" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.679909 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" 
event={"ID":"cb1dfedd-c524-4375-9b91-d9f87e34e2d0","Type":"ContainerDied","Data":"9b1d6b7104d2bc0b37eb2ae0a44a53b7cca5e5b23b0b9ed7f559d04234c60524"} Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.680275 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1d6b7104d2bc0b37eb2ae0a44a53b7cca5e5b23b0b9ed7f559d04234c60524" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.679987 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.839732 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg"] Dec 11 18:35:36 crc kubenswrapper[4877]: E1211 18:35:36.840445 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1dfedd-c524-4375-9b91-d9f87e34e2d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.840478 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1dfedd-c524-4375-9b91-d9f87e34e2d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.840843 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1dfedd-c524-4375-9b91-d9f87e34e2d0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.842124 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.845269 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.846208 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.846462 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.846689 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.846875 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.851417 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg"] Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.954201 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.954580 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.954778 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.954964 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:36 crc kubenswrapper[4877]: I1211 18:35:36.955080 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5zx\" (UniqueName: \"kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.056929 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.057025 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7p5zx\" (UniqueName: \"kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.057206 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.057281 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.057353 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.063469 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.064201 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.066124 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.070064 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.080044 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5zx\" (UniqueName: \"kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.173735 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:35:37 crc kubenswrapper[4877]: I1211 18:35:37.760225 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg"] Dec 11 18:35:38 crc kubenswrapper[4877]: I1211 18:35:38.704984 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" event={"ID":"24092f15-2f1a-441e-a0b9-8bf295b95bd0","Type":"ContainerStarted","Data":"09473fd3563b3f5020e2f30fb0b682d70c7d8ba1eb4d3ec2731bf0ad6c62f2f4"} Dec 11 18:35:38 crc kubenswrapper[4877]: I1211 18:35:38.705323 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" event={"ID":"24092f15-2f1a-441e-a0b9-8bf295b95bd0","Type":"ContainerStarted","Data":"7616e3e9433ae47a961a12f2ed32e228df6fb1ba37bcc76b07184a042ffc43f1"} Dec 11 18:35:38 crc kubenswrapper[4877]: I1211 18:35:38.725002 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" podStartSLOduration=2.257184182 podStartE2EDuration="2.724979148s" podCreationTimestamp="2025-12-11 18:35:36 +0000 UTC" firstStartedPulling="2025-12-11 18:35:37.768661137 +0000 UTC m=+2098.794905181" lastFinishedPulling="2025-12-11 18:35:38.236456063 +0000 UTC m=+2099.262700147" observedRunningTime="2025-12-11 18:35:38.720740526 +0000 UTC m=+2099.746984590" watchObservedRunningTime="2025-12-11 18:35:38.724979148 +0000 UTC m=+2099.751223192" Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.215195 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:35:46 crc kubenswrapper[4877]: E1211 18:35:46.216539 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 2m40s restarting 
failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.638540 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.638636 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.638701 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.639876 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.639981 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" 
containerID="cri-o://e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34" gracePeriod=600 Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.810759 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34" exitCode=0 Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.811153 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34"} Dec 11 18:35:46 crc kubenswrapper[4877]: I1211 18:35:46.811204 4877 scope.go:117] "RemoveContainer" containerID="388eaca8521c6f02c373d49466b66ea1553087354fbe26a293ba244ed2ddf543" Dec 11 18:35:47 crc kubenswrapper[4877]: I1211 18:35:47.835693 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627"} Dec 11 18:36:00 crc kubenswrapper[4877]: I1211 18:36:00.216104 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:36:00 crc kubenswrapper[4877]: I1211 18:36:00.994014 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9"} Dec 11 18:36:00 crc kubenswrapper[4877]: I1211 18:36:00.994631 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:36:11 crc kubenswrapper[4877]: I1211 
18:36:11.144194 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.502187 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.506445 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.513880 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.629307 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlwc\" (UniqueName: \"kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.629345 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.629384 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 
18:38:15.732683 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlwc\" (UniqueName: \"kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.733157 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.733208 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.733759 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.734010 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.757491 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zxlwc\" (UniqueName: \"kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc\") pod \"redhat-marketplace-flgx8\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:15 crc kubenswrapper[4877]: I1211 18:38:15.830131 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.314948 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.638167 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.638801 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.653428 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerDied","Data":"0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742"} Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.652845 4877 generic.go:334] "Generic (PLEG): container finished" podID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerID="0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742" exitCode=0 Dec 11 
18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.654507 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerStarted","Data":"1ecd4a5f2e429c2bcc4c412046ba26c19e35bbc3ff617ba560de514c21c7e619"} Dec 11 18:38:16 crc kubenswrapper[4877]: I1211 18:38:16.656037 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:38:18 crc kubenswrapper[4877]: E1211 18:38:18.245619 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7dd930_364b_4ffd_9952_8ed9da723785.slice/crio-ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7dd930_364b_4ffd_9952_8ed9da723785.slice/crio-conmon-ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df.scope\": RecentStats: unable to find data in memory cache]" Dec 11 18:38:18 crc kubenswrapper[4877]: I1211 18:38:18.686666 4877 generic.go:334] "Generic (PLEG): container finished" podID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerID="ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df" exitCode=0 Dec 11 18:38:18 crc kubenswrapper[4877]: I1211 18:38:18.687072 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerDied","Data":"ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df"} Dec 11 18:38:19 crc kubenswrapper[4877]: I1211 18:38:19.706156 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" 
event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerStarted","Data":"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87"} Dec 11 18:38:19 crc kubenswrapper[4877]: I1211 18:38:19.741199 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-flgx8" podStartSLOduration=2.119166641 podStartE2EDuration="4.741180709s" podCreationTimestamp="2025-12-11 18:38:15 +0000 UTC" firstStartedPulling="2025-12-11 18:38:16.655511798 +0000 UTC m=+2257.681755882" lastFinishedPulling="2025-12-11 18:38:19.277525886 +0000 UTC m=+2260.303769950" observedRunningTime="2025-12-11 18:38:19.733214422 +0000 UTC m=+2260.759458526" watchObservedRunningTime="2025-12-11 18:38:19.741180709 +0000 UTC m=+2260.767424763" Dec 11 18:38:25 crc kubenswrapper[4877]: I1211 18:38:25.830409 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:25 crc kubenswrapper[4877]: I1211 18:38:25.831149 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:25 crc kubenswrapper[4877]: I1211 18:38:25.955365 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:26 crc kubenswrapper[4877]: I1211 18:38:26.853613 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:26 crc kubenswrapper[4877]: I1211 18:38:26.923585 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:28 crc kubenswrapper[4877]: I1211 18:38:28.806093 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-flgx8" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="registry-server" 
containerID="cri-o://87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87" gracePeriod=2 Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.362811 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.447771 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxlwc\" (UniqueName: \"kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc\") pod \"5f7dd930-364b-4ffd-9952-8ed9da723785\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.447975 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities\") pod \"5f7dd930-364b-4ffd-9952-8ed9da723785\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.448010 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content\") pod \"5f7dd930-364b-4ffd-9952-8ed9da723785\" (UID: \"5f7dd930-364b-4ffd-9952-8ed9da723785\") " Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.448985 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities" (OuterVolumeSpecName: "utilities") pod "5f7dd930-364b-4ffd-9952-8ed9da723785" (UID: "5f7dd930-364b-4ffd-9952-8ed9da723785"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.458091 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc" (OuterVolumeSpecName: "kube-api-access-zxlwc") pod "5f7dd930-364b-4ffd-9952-8ed9da723785" (UID: "5f7dd930-364b-4ffd-9952-8ed9da723785"). InnerVolumeSpecName "kube-api-access-zxlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.493085 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f7dd930-364b-4ffd-9952-8ed9da723785" (UID: "5f7dd930-364b-4ffd-9952-8ed9da723785"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.550235 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.550282 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7dd930-364b-4ffd-9952-8ed9da723785-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.550306 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxlwc\" (UniqueName: \"kubernetes.io/projected/5f7dd930-364b-4ffd-9952-8ed9da723785-kube-api-access-zxlwc\") on node \"crc\" DevicePath \"\"" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.818228 4877 generic.go:334] "Generic (PLEG): container finished" podID="5f7dd930-364b-4ffd-9952-8ed9da723785" 
containerID="87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87" exitCode=0 Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.818337 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-flgx8" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.818357 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerDied","Data":"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87"} Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.818866 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-flgx8" event={"ID":"5f7dd930-364b-4ffd-9952-8ed9da723785","Type":"ContainerDied","Data":"1ecd4a5f2e429c2bcc4c412046ba26c19e35bbc3ff617ba560de514c21c7e619"} Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.818935 4877 scope.go:117] "RemoveContainer" containerID="87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.850836 4877 scope.go:117] "RemoveContainer" containerID="ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.895256 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.906216 4877 scope.go:117] "RemoveContainer" containerID="0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.911718 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-flgx8"] Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.970667 4877 scope.go:117] "RemoveContainer" containerID="87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87" Dec 11 
18:38:29 crc kubenswrapper[4877]: E1211 18:38:29.971226 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87\": container with ID starting with 87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87 not found: ID does not exist" containerID="87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.971265 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87"} err="failed to get container status \"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87\": rpc error: code = NotFound desc = could not find container \"87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87\": container with ID starting with 87ab17f172c917375503f99c126ab3ada464286047df1543e0df5f38d47c0c87 not found: ID does not exist" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.971292 4877 scope.go:117] "RemoveContainer" containerID="ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df" Dec 11 18:38:29 crc kubenswrapper[4877]: E1211 18:38:29.971765 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df\": container with ID starting with ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df not found: ID does not exist" containerID="ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.971843 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df"} err="failed to get container status 
\"ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df\": rpc error: code = NotFound desc = could not find container \"ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df\": container with ID starting with ac603152fb613b61fc8fb8a7c7f56de52a1f4e4109ecb7d437d2ef32880681df not found: ID does not exist" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.971871 4877 scope.go:117] "RemoveContainer" containerID="0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742" Dec 11 18:38:29 crc kubenswrapper[4877]: E1211 18:38:29.972533 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742\": container with ID starting with 0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742 not found: ID does not exist" containerID="0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742" Dec 11 18:38:29 crc kubenswrapper[4877]: I1211 18:38:29.972693 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742"} err="failed to get container status \"0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742\": rpc error: code = NotFound desc = could not find container \"0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742\": container with ID starting with 0b9a9317dfbcdd32ef87bb237552854ce7b77792f0aef9dbc32a8c12e58d2742 not found: ID does not exist" Dec 11 18:38:31 crc kubenswrapper[4877]: I1211 18:38:31.234416 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" path="/var/lib/kubelet/pods/5f7dd930-364b-4ffd-9952-8ed9da723785/volumes" Dec 11 18:38:31 crc kubenswrapper[4877]: I1211 18:38:31.854066 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" 
containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" exitCode=1 Dec 11 18:38:31 crc kubenswrapper[4877]: I1211 18:38:31.854141 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9"} Dec 11 18:38:31 crc kubenswrapper[4877]: I1211 18:38:31.854199 4877 scope.go:117] "RemoveContainer" containerID="1e806fa3e1b0356214d7054fbd9c7212b259b8f799eaf8acb5e0027ef06cdccf" Dec 11 18:38:31 crc kubenswrapper[4877]: I1211 18:38:31.855734 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:38:31 crc kubenswrapper[4877]: E1211 18:38:31.856411 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:38:41 crc kubenswrapper[4877]: I1211 18:38:41.137953 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:38:41 crc kubenswrapper[4877]: I1211 18:38:41.138656 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:38:41 crc kubenswrapper[4877]: I1211 18:38:41.139515 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:38:41 crc kubenswrapper[4877]: E1211 18:38:41.139817 4877 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:38:46 crc kubenswrapper[4877]: I1211 18:38:46.638758 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:38:46 crc kubenswrapper[4877]: I1211 18:38:46.640886 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.410974 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:38:49 crc kubenswrapper[4877]: E1211 18:38:49.411700 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="extract-content" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.411714 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="extract-content" Dec 11 18:38:49 crc kubenswrapper[4877]: E1211 18:38:49.411743 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="extract-utilities" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.411751 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="extract-utilities" Dec 11 18:38:49 crc kubenswrapper[4877]: E1211 18:38:49.411773 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="registry-server" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.411781 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="registry-server" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.412010 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7dd930-364b-4ffd-9952-8ed9da723785" containerName="registry-server" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.414201 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.432542 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.505832 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.505906 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.506017 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjj9\" (UniqueName: \"kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.607747 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.607820 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.607871 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjj9\" (UniqueName: \"kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.608274 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.608417 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.636275 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjj9\" (UniqueName: \"kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9\") pod \"community-operators-8jltg\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:49 crc kubenswrapper[4877]: I1211 18:38:49.743162 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:50 crc kubenswrapper[4877]: I1211 18:38:50.299825 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.074025 4877 generic.go:334] "Generic (PLEG): container finished" podID="2c5dead4-2f71-4693-83e2-202f305bca79" containerID="7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df" exitCode=0 Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.074340 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerDied","Data":"7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df"} Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.074389 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerStarted","Data":"6ceca4a3e80b06b61b5a7b9d6e652b468fbdf355b92b99facb8f5070580f155d"} Dec 11 18:38:51 crc 
kubenswrapper[4877]: I1211 18:38:51.799067 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.803290 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.822325 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.858192 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.858408 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhcl\" (UniqueName: \"kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.858489 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.960569 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content\") pod 
\"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.960693 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zhcl\" (UniqueName: \"kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.960733 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.961272 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.961364 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content\") pod \"redhat-operators-8fxfz\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:51 crc kubenswrapper[4877]: I1211 18:38:51.987900 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zhcl\" (UniqueName: \"kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl\") pod \"redhat-operators-8fxfz\" (UID: 
\"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:52 crc kubenswrapper[4877]: I1211 18:38:52.149105 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:38:52 crc kubenswrapper[4877]: I1211 18:38:52.679722 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:38:53 crc kubenswrapper[4877]: I1211 18:38:53.092307 4877 generic.go:334] "Generic (PLEG): container finished" podID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerID="fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8" exitCode=0 Dec 11 18:38:53 crc kubenswrapper[4877]: I1211 18:38:53.092411 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerDied","Data":"fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8"} Dec 11 18:38:53 crc kubenswrapper[4877]: I1211 18:38:53.092674 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerStarted","Data":"3934c8935261d6b8a203c492b7ca7efae22492385529aec782935fc8aae13889"} Dec 11 18:38:53 crc kubenswrapper[4877]: I1211 18:38:53.094671 4877 generic.go:334] "Generic (PLEG): container finished" podID="2c5dead4-2f71-4693-83e2-202f305bca79" containerID="83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93" exitCode=0 Dec 11 18:38:53 crc kubenswrapper[4877]: I1211 18:38:53.094701 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerDied","Data":"83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93"} Dec 11 18:38:54 crc kubenswrapper[4877]: I1211 18:38:54.109204 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerStarted","Data":"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919"} Dec 11 18:38:54 crc kubenswrapper[4877]: I1211 18:38:54.111634 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerStarted","Data":"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce"} Dec 11 18:38:54 crc kubenswrapper[4877]: I1211 18:38:54.151714 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jltg" podStartSLOduration=2.542334339 podStartE2EDuration="5.151685661s" podCreationTimestamp="2025-12-11 18:38:49 +0000 UTC" firstStartedPulling="2025-12-11 18:38:51.076050891 +0000 UTC m=+2292.102294935" lastFinishedPulling="2025-12-11 18:38:53.685402213 +0000 UTC m=+2294.711646257" observedRunningTime="2025-12-11 18:38:54.131393497 +0000 UTC m=+2295.157637561" watchObservedRunningTime="2025-12-11 18:38:54.151685661 +0000 UTC m=+2295.177929745" Dec 11 18:38:56 crc kubenswrapper[4877]: I1211 18:38:56.215430 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:38:56 crc kubenswrapper[4877]: E1211 18:38:56.216302 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:38:57 crc kubenswrapper[4877]: I1211 18:38:57.143066 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerID="2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce" exitCode=0 Dec 11 18:38:57 crc kubenswrapper[4877]: I1211 18:38:57.143135 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerDied","Data":"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce"} Dec 11 18:38:59 crc kubenswrapper[4877]: I1211 18:38:59.179911 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerStarted","Data":"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2"} Dec 11 18:38:59 crc kubenswrapper[4877]: I1211 18:38:59.224340 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fxfz" podStartSLOduration=2.958370304 podStartE2EDuration="8.224251365s" podCreationTimestamp="2025-12-11 18:38:51 +0000 UTC" firstStartedPulling="2025-12-11 18:38:53.094127615 +0000 UTC m=+2294.120371659" lastFinishedPulling="2025-12-11 18:38:58.360008666 +0000 UTC m=+2299.386252720" observedRunningTime="2025-12-11 18:38:59.209727861 +0000 UTC m=+2300.235971925" watchObservedRunningTime="2025-12-11 18:38:59.224251365 +0000 UTC m=+2300.250495439" Dec 11 18:38:59 crc kubenswrapper[4877]: I1211 18:38:59.743861 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:59 crc kubenswrapper[4877]: I1211 18:38:59.746714 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:38:59 crc kubenswrapper[4877]: I1211 18:38:59.831450 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:39:00 crc 
kubenswrapper[4877]: I1211 18:39:00.246721 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.000756 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.150286 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.151776 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.208322 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jltg" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="registry-server" containerID="cri-o://01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919" gracePeriod=2 Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.731193 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.848811 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities\") pod \"2c5dead4-2f71-4693-83e2-202f305bca79\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.848866 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content\") pod \"2c5dead4-2f71-4693-83e2-202f305bca79\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.849140 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjj9\" (UniqueName: \"kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9\") pod \"2c5dead4-2f71-4693-83e2-202f305bca79\" (UID: \"2c5dead4-2f71-4693-83e2-202f305bca79\") " Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.849727 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities" (OuterVolumeSpecName: "utilities") pod "2c5dead4-2f71-4693-83e2-202f305bca79" (UID: "2c5dead4-2f71-4693-83e2-202f305bca79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.856704 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9" (OuterVolumeSpecName: "kube-api-access-ncjj9") pod "2c5dead4-2f71-4693-83e2-202f305bca79" (UID: "2c5dead4-2f71-4693-83e2-202f305bca79"). InnerVolumeSpecName "kube-api-access-ncjj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.912669 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c5dead4-2f71-4693-83e2-202f305bca79" (UID: "2c5dead4-2f71-4693-83e2-202f305bca79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.951747 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncjj9\" (UniqueName: \"kubernetes.io/projected/2c5dead4-2f71-4693-83e2-202f305bca79-kube-api-access-ncjj9\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.951795 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:02 crc kubenswrapper[4877]: I1211 18:39:02.951810 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5dead4-2f71-4693-83e2-202f305bca79-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.220189 4877 generic.go:334] "Generic (PLEG): container finished" podID="2c5dead4-2f71-4693-83e2-202f305bca79" containerID="01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919" exitCode=0 Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.220275 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jltg" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.230130 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fxfz" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="registry-server" probeResult="failure" output=< Dec 11 18:39:03 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:39:03 crc kubenswrapper[4877]: > Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.233558 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerDied","Data":"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919"} Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.233626 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jltg" event={"ID":"2c5dead4-2f71-4693-83e2-202f305bca79","Type":"ContainerDied","Data":"6ceca4a3e80b06b61b5a7b9d6e652b468fbdf355b92b99facb8f5070580f155d"} Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.233658 4877 scope.go:117] "RemoveContainer" containerID="01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.268656 4877 scope.go:117] "RemoveContainer" containerID="83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.268786 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.280649 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jltg"] Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.296929 4877 scope.go:117] "RemoveContainer" 
containerID="7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.337241 4877 scope.go:117] "RemoveContainer" containerID="01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919" Dec 11 18:39:03 crc kubenswrapper[4877]: E1211 18:39:03.338181 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919\": container with ID starting with 01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919 not found: ID does not exist" containerID="01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.338255 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919"} err="failed to get container status \"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919\": rpc error: code = NotFound desc = could not find container \"01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919\": container with ID starting with 01940c1b63fb981d51dfbe6c576248f9162d04bac632ac33aa8c8c49bcc4a919 not found: ID does not exist" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.338309 4877 scope.go:117] "RemoveContainer" containerID="83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93" Dec 11 18:39:03 crc kubenswrapper[4877]: E1211 18:39:03.338847 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93\": container with ID starting with 83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93 not found: ID does not exist" containerID="83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93" Dec 11 18:39:03 crc 
kubenswrapper[4877]: I1211 18:39:03.338877 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93"} err="failed to get container status \"83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93\": rpc error: code = NotFound desc = could not find container \"83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93\": container with ID starting with 83de1dc3c5c9020e6994c3bc8b5e3a4492b83fda12a09fe79c521e298ef7af93 not found: ID does not exist" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.338896 4877 scope.go:117] "RemoveContainer" containerID="7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df" Dec 11 18:39:03 crc kubenswrapper[4877]: E1211 18:39:03.339246 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df\": container with ID starting with 7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df not found: ID does not exist" containerID="7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df" Dec 11 18:39:03 crc kubenswrapper[4877]: I1211 18:39:03.339295 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df"} err="failed to get container status \"7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df\": rpc error: code = NotFound desc = could not find container \"7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df\": container with ID starting with 7fac2b86e37813a1936623d5bad457cd0d48a4e4710f01c00802d369556ae5df not found: ID does not exist" Dec 11 18:39:05 crc kubenswrapper[4877]: I1211 18:39:05.230418 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" 
path="/var/lib/kubelet/pods/2c5dead4-2f71-4693-83e2-202f305bca79/volumes" Dec 11 18:39:10 crc kubenswrapper[4877]: I1211 18:39:10.215688 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:39:10 crc kubenswrapper[4877]: E1211 18:39:10.216833 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:39:12 crc kubenswrapper[4877]: I1211 18:39:12.235439 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:12 crc kubenswrapper[4877]: I1211 18:39:12.299467 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:12 crc kubenswrapper[4877]: I1211 18:39:12.473824 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.336538 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fxfz" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="registry-server" containerID="cri-o://35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2" gracePeriod=2 Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.826767 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.907044 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities\") pod \"766180e4-649c-4e2b-b36b-005aec7dc0ea\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.907089 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content\") pod \"766180e4-649c-4e2b-b36b-005aec7dc0ea\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.907163 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zhcl\" (UniqueName: \"kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl\") pod \"766180e4-649c-4e2b-b36b-005aec7dc0ea\" (UID: \"766180e4-649c-4e2b-b36b-005aec7dc0ea\") " Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.908446 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities" (OuterVolumeSpecName: "utilities") pod "766180e4-649c-4e2b-b36b-005aec7dc0ea" (UID: "766180e4-649c-4e2b-b36b-005aec7dc0ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:39:13 crc kubenswrapper[4877]: I1211 18:39:13.915527 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl" (OuterVolumeSpecName: "kube-api-access-4zhcl") pod "766180e4-649c-4e2b-b36b-005aec7dc0ea" (UID: "766180e4-649c-4e2b-b36b-005aec7dc0ea"). InnerVolumeSpecName "kube-api-access-4zhcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.010143 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.010428 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zhcl\" (UniqueName: \"kubernetes.io/projected/766180e4-649c-4e2b-b36b-005aec7dc0ea-kube-api-access-4zhcl\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.056044 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "766180e4-649c-4e2b-b36b-005aec7dc0ea" (UID: "766180e4-649c-4e2b-b36b-005aec7dc0ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.112575 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766180e4-649c-4e2b-b36b-005aec7dc0ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.351129 4877 generic.go:334] "Generic (PLEG): container finished" podID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerID="35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2" exitCode=0 Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.351194 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8fxfz" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.351192 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerDied","Data":"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2"} Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.351355 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fxfz" event={"ID":"766180e4-649c-4e2b-b36b-005aec7dc0ea","Type":"ContainerDied","Data":"3934c8935261d6b8a203c492b7ca7efae22492385529aec782935fc8aae13889"} Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.351455 4877 scope.go:117] "RemoveContainer" containerID="35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.395438 4877 scope.go:117] "RemoveContainer" containerID="2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.410833 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.434055 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fxfz"] Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.436504 4877 scope.go:117] "RemoveContainer" containerID="fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.485275 4877 scope.go:117] "RemoveContainer" containerID="35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2" Dec 11 18:39:14 crc kubenswrapper[4877]: E1211 18:39:14.486036 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2\": container with ID starting with 35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2 not found: ID does not exist" containerID="35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.486161 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2"} err="failed to get container status \"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2\": rpc error: code = NotFound desc = could not find container \"35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2\": container with ID starting with 35fc062b3b3e21f57ba37e7718393e6bf1f1df90f23f231d7c18b3f481d3e3f2 not found: ID does not exist" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.486253 4877 scope.go:117] "RemoveContainer" containerID="2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce" Dec 11 18:39:14 crc kubenswrapper[4877]: E1211 18:39:14.486722 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce\": container with ID starting with 2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce not found: ID does not exist" containerID="2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.486765 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce"} err="failed to get container status \"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce\": rpc error: code = NotFound desc = could not find container \"2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce\": container with ID 
starting with 2fc0cb3ff84cb014d1c9ba53fcd7416e3859a65add3d61abd332c09bf4ad1bce not found: ID does not exist" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.486785 4877 scope.go:117] "RemoveContainer" containerID="fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8" Dec 11 18:39:14 crc kubenswrapper[4877]: E1211 18:39:14.487152 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8\": container with ID starting with fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8 not found: ID does not exist" containerID="fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8" Dec 11 18:39:14 crc kubenswrapper[4877]: I1211 18:39:14.487178 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8"} err="failed to get container status \"fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8\": rpc error: code = NotFound desc = could not find container \"fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8\": container with ID starting with fba04966ca6eb3a95a59747747b1df9fc634876de2dd73db3a2b774cab0c8be8 not found: ID does not exist" Dec 11 18:39:15 crc kubenswrapper[4877]: I1211 18:39:15.233812 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" path="/var/lib/kubelet/pods/766180e4-649c-4e2b-b36b-005aec7dc0ea/volumes" Dec 11 18:39:16 crc kubenswrapper[4877]: I1211 18:39:16.637854 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:39:16 crc kubenswrapper[4877]: I1211 
18:39:16.637954 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:39:16 crc kubenswrapper[4877]: I1211 18:39:16.638029 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:39:16 crc kubenswrapper[4877]: I1211 18:39:16.639186 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:39:16 crc kubenswrapper[4877]: I1211 18:39:16.639300 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" gracePeriod=600 Dec 11 18:39:16 crc kubenswrapper[4877]: E1211 18:39:16.777027 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:39:17 crc kubenswrapper[4877]: I1211 18:39:17.396189 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" exitCode=0 Dec 11 18:39:17 crc kubenswrapper[4877]: I1211 18:39:17.396248 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627"} Dec 11 18:39:17 crc kubenswrapper[4877]: I1211 18:39:17.396338 4877 scope.go:117] "RemoveContainer" containerID="e540e4e31a0b63dc047c6020b5d13940da5103cf44c18566d0ff205c7b394c34" Dec 11 18:39:17 crc kubenswrapper[4877]: I1211 18:39:17.397639 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:39:17 crc kubenswrapper[4877]: E1211 18:39:17.398251 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:39:21 crc kubenswrapper[4877]: I1211 18:39:21.215997 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:39:21 crc kubenswrapper[4877]: E1211 18:39:21.216918 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:39:28 
crc kubenswrapper[4877]: I1211 18:39:28.216160 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:39:28 crc kubenswrapper[4877]: E1211 18:39:28.217519 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:39:36 crc kubenswrapper[4877]: I1211 18:39:36.215609 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:39:36 crc kubenswrapper[4877]: E1211 18:39:36.216247 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:39:40 crc kubenswrapper[4877]: I1211 18:39:40.216837 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:39:40 crc kubenswrapper[4877]: E1211 18:39:40.217732 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 
18:39:48 crc kubenswrapper[4877]: I1211 18:39:48.216055 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:39:48 crc kubenswrapper[4877]: E1211 18:39:48.217228 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:39:51 crc kubenswrapper[4877]: I1211 18:39:51.216351 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:39:51 crc kubenswrapper[4877]: E1211 18:39:51.217204 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:40:03 crc kubenswrapper[4877]: I1211 18:40:03.215983 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:40:03 crc kubenswrapper[4877]: E1211 18:40:03.216976 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" 
Dec 11 18:40:03 crc kubenswrapper[4877]: I1211 18:40:03.217167 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:40:03 crc kubenswrapper[4877]: E1211 18:40:03.217430 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:40:09 crc kubenswrapper[4877]: I1211 18:40:09.031135 4877 generic.go:334] "Generic (PLEG): container finished" podID="24092f15-2f1a-441e-a0b9-8bf295b95bd0" containerID="09473fd3563b3f5020e2f30fb0b682d70c7d8ba1eb4d3ec2731bf0ad6c62f2f4" exitCode=0 Dec 11 18:40:09 crc kubenswrapper[4877]: I1211 18:40:09.031254 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" event={"ID":"24092f15-2f1a-441e-a0b9-8bf295b95bd0","Type":"ContainerDied","Data":"09473fd3563b3f5020e2f30fb0b682d70c7d8ba1eb4d3ec2731bf0ad6c62f2f4"} Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.567754 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.669314 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory\") pod \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.669353 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0\") pod \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.669512 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key\") pod \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.669637 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5zx\" (UniqueName: \"kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx\") pod \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.669720 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle\") pod \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\" (UID: \"24092f15-2f1a-441e-a0b9-8bf295b95bd0\") " Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.675878 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "24092f15-2f1a-441e-a0b9-8bf295b95bd0" (UID: "24092f15-2f1a-441e-a0b9-8bf295b95bd0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.677454 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx" (OuterVolumeSpecName: "kube-api-access-7p5zx") pod "24092f15-2f1a-441e-a0b9-8bf295b95bd0" (UID: "24092f15-2f1a-441e-a0b9-8bf295b95bd0"). InnerVolumeSpecName "kube-api-access-7p5zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.708037 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24092f15-2f1a-441e-a0b9-8bf295b95bd0" (UID: "24092f15-2f1a-441e-a0b9-8bf295b95bd0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.714579 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory" (OuterVolumeSpecName: "inventory") pod "24092f15-2f1a-441e-a0b9-8bf295b95bd0" (UID: "24092f15-2f1a-441e-a0b9-8bf295b95bd0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.715516 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "24092f15-2f1a-441e-a0b9-8bf295b95bd0" (UID: "24092f15-2f1a-441e-a0b9-8bf295b95bd0"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.772489 4877 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.772550 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.772568 4877 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.772582 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24092f15-2f1a-441e-a0b9-8bf295b95bd0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:40:10 crc kubenswrapper[4877]: I1211 18:40:10.772783 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5zx\" (UniqueName: \"kubernetes.io/projected/24092f15-2f1a-441e-a0b9-8bf295b95bd0-kube-api-access-7p5zx\") on node \"crc\" DevicePath \"\"" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.052762 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" event={"ID":"24092f15-2f1a-441e-a0b9-8bf295b95bd0","Type":"ContainerDied","Data":"7616e3e9433ae47a961a12f2ed32e228df6fb1ba37bcc76b07184a042ffc43f1"} Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.052801 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7616e3e9433ae47a961a12f2ed32e228df6fb1ba37bcc76b07184a042ffc43f1" Dec 11 
18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.052830 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.185849 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p"] Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186537 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24092f15-2f1a-441e-a0b9-8bf295b95bd0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186568 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="24092f15-2f1a-441e-a0b9-8bf295b95bd0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186587 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186601 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186625 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186634 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186656 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="extract-utilities" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186667 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" 
containerName="extract-utilities" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186687 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="extract-content" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186695 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="extract-content" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186719 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="extract-utilities" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186744 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="extract-utilities" Dec 11 18:40:11 crc kubenswrapper[4877]: E1211 18:40:11.186775 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="extract-content" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.186783 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="extract-content" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.187172 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="766180e4-649c-4e2b-b36b-005aec7dc0ea" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.187197 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="24092f15-2f1a-441e-a0b9-8bf295b95bd0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.187223 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5dead4-2f71-4693-83e2-202f305bca79" containerName="registry-server" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.192229 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.194519 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199008 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p"] Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199118 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199119 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199324 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199344 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199487 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.199693 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.382628 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: 
I1211 18:40:11.383025 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.383667 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.385401 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz4r\" (UniqueName: \"kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.385613 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.385886 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.386030 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.386240 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.386402 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488629 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488709 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488736 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488783 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz4r\" (UniqueName: \"kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488821 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488882 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488903 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488946 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.488969 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.490483 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 
crc kubenswrapper[4877]: I1211 18:40:11.495926 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.496181 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.496588 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.497006 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.498029 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.500354 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.501477 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.517025 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgz4r\" (UniqueName: \"kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6b22p\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:11 crc kubenswrapper[4877]: I1211 18:40:11.517568 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:40:12 crc kubenswrapper[4877]: I1211 18:40:12.267404 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p"] Dec 11 18:40:12 crc kubenswrapper[4877]: W1211 18:40:12.277768 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27668d56_a427_4392_85d2_4e4cc52342aa.slice/crio-afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af WatchSource:0}: Error finding container afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af: Status 404 returned error can't find the container with id afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af Dec 11 18:40:13 crc kubenswrapper[4877]: I1211 18:40:13.076044 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" event={"ID":"27668d56-a427-4392-85d2-4e4cc52342aa","Type":"ContainerStarted","Data":"afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af"} Dec 11 18:40:14 crc kubenswrapper[4877]: I1211 18:40:14.086132 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" event={"ID":"27668d56-a427-4392-85d2-4e4cc52342aa","Type":"ContainerStarted","Data":"0804d9763785536be2e4dc5bd7a1a6128e53e436ef79fafc5afd745cf1374654"} Dec 11 18:40:14 crc kubenswrapper[4877]: I1211 18:40:14.215585 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:40:14 crc kubenswrapper[4877]: E1211 18:40:14.216063 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:40:18 crc kubenswrapper[4877]: I1211 18:40:18.216460 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:40:18 crc kubenswrapper[4877]: E1211 18:40:18.217610 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:40:29 crc kubenswrapper[4877]: I1211 18:40:29.230608 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:40:29 crc kubenswrapper[4877]: E1211 18:40:29.234053 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:40:31 crc kubenswrapper[4877]: I1211 18:40:31.216205 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:40:31 crc kubenswrapper[4877]: E1211 18:40:31.217091 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:40:41 crc kubenswrapper[4877]: I1211 18:40:41.670298 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7b4dd6dd9-6mv2v" podUID="4808e7d5-7e53-4b59-a46c-86838df224c0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 11 18:40:43 crc kubenswrapper[4877]: I1211 18:40:43.216219 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:40:43 crc kubenswrapper[4877]: E1211 18:40:43.216781 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:40:45 crc kubenswrapper[4877]: I1211 18:40:45.215969 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:40:45 crc kubenswrapper[4877]: E1211 18:40:45.216663 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:40:56 crc kubenswrapper[4877]: I1211 18:40:56.216062 4877 scope.go:117] "RemoveContainer" 
containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:40:56 crc kubenswrapper[4877]: E1211 18:40:56.216686 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:40:58 crc kubenswrapper[4877]: I1211 18:40:58.216537 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:40:58 crc kubenswrapper[4877]: E1211 18:40:58.217435 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:41:09 crc kubenswrapper[4877]: I1211 18:41:09.230066 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:41:09 crc kubenswrapper[4877]: E1211 18:41:09.230919 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:41:10 crc kubenswrapper[4877]: I1211 18:41:10.215188 4877 scope.go:117] 
"RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:41:10 crc kubenswrapper[4877]: E1211 18:41:10.215779 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:41:21 crc kubenswrapper[4877]: I1211 18:41:21.216309 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:41:21 crc kubenswrapper[4877]: E1211 18:41:21.217673 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:41:23 crc kubenswrapper[4877]: I1211 18:41:23.215455 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:41:23 crc kubenswrapper[4877]: E1211 18:41:23.216147 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.216713 4877 scope.go:117] 
"RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:41:35 crc kubenswrapper[4877]: E1211 18:41:35.218056 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.420303 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" podStartSLOduration=83.775133512 podStartE2EDuration="1m24.420274311s" podCreationTimestamp="2025-12-11 18:40:11 +0000 UTC" firstStartedPulling="2025-12-11 18:40:12.280646742 +0000 UTC m=+2373.306890786" lastFinishedPulling="2025-12-11 18:40:12.925787501 +0000 UTC m=+2373.952031585" observedRunningTime="2025-12-11 18:40:14.122690454 +0000 UTC m=+2375.148934488" watchObservedRunningTime="2025-12-11 18:41:35.420274311 +0000 UTC m=+2456.446518365" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.426313 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-df92c"] Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.428700 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.442374 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-df92c"] Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.504720 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-catalog-content\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.504953 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkst5\" (UniqueName: \"kubernetes.io/projected/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-kube-api-access-xkst5\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.505099 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-utilities\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.607275 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-catalog-content\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.607854 4877 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-catalog-content\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.607993 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkst5\" (UniqueName: \"kubernetes.io/projected/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-kube-api-access-xkst5\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.608491 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-utilities\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.608816 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-utilities\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.641182 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkst5\" (UniqueName: \"kubernetes.io/projected/00bd5c61-a49a-4020-8e9a-fa130c65c7e2-kube-api-access-xkst5\") pod \"certified-operators-df92c\" (UID: \"00bd5c61-a49a-4020-8e9a-fa130c65c7e2\") " pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:35 crc kubenswrapper[4877]: I1211 18:41:35.767847 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:36 crc kubenswrapper[4877]: I1211 18:41:36.215194 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:41:36 crc kubenswrapper[4877]: E1211 18:41:36.216155 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:41:36 crc kubenswrapper[4877]: I1211 18:41:36.305878 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-df92c"] Dec 11 18:41:37 crc kubenswrapper[4877]: I1211 18:41:37.049505 4877 generic.go:334] "Generic (PLEG): container finished" podID="00bd5c61-a49a-4020-8e9a-fa130c65c7e2" containerID="d12b7263e41c65e7c41ee766ab08187225a560ea3ed0b50dde7e5b18b716e850" exitCode=0 Dec 11 18:41:37 crc kubenswrapper[4877]: I1211 18:41:37.049707 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df92c" event={"ID":"00bd5c61-a49a-4020-8e9a-fa130c65c7e2","Type":"ContainerDied","Data":"d12b7263e41c65e7c41ee766ab08187225a560ea3ed0b50dde7e5b18b716e850"} Dec 11 18:41:37 crc kubenswrapper[4877]: I1211 18:41:37.049881 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df92c" event={"ID":"00bd5c61-a49a-4020-8e9a-fa130c65c7e2","Type":"ContainerStarted","Data":"fd48f3eae72e131807dae5a1a4999306b44512b8e881511b538ed6779e52c45a"} Dec 11 18:41:43 crc kubenswrapper[4877]: I1211 18:41:43.125001 4877 generic.go:334] "Generic (PLEG): container finished" podID="00bd5c61-a49a-4020-8e9a-fa130c65c7e2" 
containerID="2c0e566925e16a95752a43344b039fd6cd0652377998ae37de2c25aae7963abf" exitCode=0 Dec 11 18:41:43 crc kubenswrapper[4877]: I1211 18:41:43.125179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df92c" event={"ID":"00bd5c61-a49a-4020-8e9a-fa130c65c7e2","Type":"ContainerDied","Data":"2c0e566925e16a95752a43344b039fd6cd0652377998ae37de2c25aae7963abf"} Dec 11 18:41:44 crc kubenswrapper[4877]: I1211 18:41:44.136305 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df92c" event={"ID":"00bd5c61-a49a-4020-8e9a-fa130c65c7e2","Type":"ContainerStarted","Data":"8e06c512d3ea2a643e5b2bed9d83f5e523924ca5ad0ac8238bab0f283db92fb4"} Dec 11 18:41:44 crc kubenswrapper[4877]: I1211 18:41:44.160540 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-df92c" podStartSLOduration=2.500059533 podStartE2EDuration="9.160506264s" podCreationTimestamp="2025-12-11 18:41:35 +0000 UTC" firstStartedPulling="2025-12-11 18:41:37.052046733 +0000 UTC m=+2458.078290817" lastFinishedPulling="2025-12-11 18:41:43.712493464 +0000 UTC m=+2464.738737548" observedRunningTime="2025-12-11 18:41:44.152557601 +0000 UTC m=+2465.178801675" watchObservedRunningTime="2025-12-11 18:41:44.160506264 +0000 UTC m=+2465.186750328" Dec 11 18:41:45 crc kubenswrapper[4877]: I1211 18:41:45.768894 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:45 crc kubenswrapper[4877]: I1211 18:41:45.769405 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:46 crc kubenswrapper[4877]: I1211 18:41:46.833317 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-df92c" podUID="00bd5c61-a49a-4020-8e9a-fa130c65c7e2" containerName="registry-server" 
probeResult="failure" output=< Dec 11 18:41:46 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:41:46 crc kubenswrapper[4877]: > Dec 11 18:41:49 crc kubenswrapper[4877]: I1211 18:41:49.232346 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:41:49 crc kubenswrapper[4877]: I1211 18:41:49.233085 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:41:49 crc kubenswrapper[4877]: E1211 18:41:49.233468 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:41:49 crc kubenswrapper[4877]: E1211 18:41:49.233949 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:41:55 crc kubenswrapper[4877]: I1211 18:41:55.855076 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:55 crc kubenswrapper[4877]: I1211 18:41:55.929365 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-df92c" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.022512 4877 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-df92c"] Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.101913 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.102426 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jcwhn" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="registry-server" containerID="cri-o://82f91495fa80b8006bd9a7b4074a11935a8d16c1e45a146fb419920ad2fbc5d9" gracePeriod=2 Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.274522 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerDied","Data":"82f91495fa80b8006bd9a7b4074a11935a8d16c1e45a146fb419920ad2fbc5d9"} Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.274774 4877 generic.go:334] "Generic (PLEG): container finished" podID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerID="82f91495fa80b8006bd9a7b4074a11935a8d16c1e45a146fb419920ad2fbc5d9" exitCode=0 Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.610880 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.717409 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq7gh\" (UniqueName: \"kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh\") pod \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.717745 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content\") pod \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.717823 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities\") pod \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\" (UID: \"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d\") " Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.719096 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities" (OuterVolumeSpecName: "utilities") pod "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" (UID: "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.724598 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh" (OuterVolumeSpecName: "kube-api-access-jq7gh") pod "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" (UID: "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d"). InnerVolumeSpecName "kube-api-access-jq7gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.766997 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" (UID: "8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.829069 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.829125 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:41:56 crc kubenswrapper[4877]: I1211 18:41:56.829148 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq7gh\" (UniqueName: \"kubernetes.io/projected/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d-kube-api-access-jq7gh\") on node \"crc\" DevicePath \"\"" Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.294660 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcwhn" Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.294763 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcwhn" event={"ID":"8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d","Type":"ContainerDied","Data":"eeed259e3d616002540f97f583edae4ad0aff1e9581cd4c06ced49ee4d01b9f4"} Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.294821 4877 scope.go:117] "RemoveContainer" containerID="82f91495fa80b8006bd9a7b4074a11935a8d16c1e45a146fb419920ad2fbc5d9" Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.329098 4877 scope.go:117] "RemoveContainer" containerID="f669be5780c35422e71e2c14920203ef698971ac179b98749f04176c41459cfd" Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.331222 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.343543 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jcwhn"] Dec 11 18:41:57 crc kubenswrapper[4877]: I1211 18:41:57.357144 4877 scope.go:117] "RemoveContainer" containerID="777b16c6f2dc90e1fcd9cdcf2fe429dba21de1b17f9316738e2268a0da956b61" Dec 11 18:41:59 crc kubenswrapper[4877]: I1211 18:41:59.235641 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" path="/var/lib/kubelet/pods/8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d/volumes" Dec 11 18:42:01 crc kubenswrapper[4877]: I1211 18:42:01.215577 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:42:01 crc kubenswrapper[4877]: E1211 18:42:01.216096 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:42:04 crc kubenswrapper[4877]: I1211 18:42:04.216341 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:42:04 crc kubenswrapper[4877]: E1211 18:42:04.217365 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:42:13 crc kubenswrapper[4877]: I1211 18:42:13.216328 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:42:13 crc kubenswrapper[4877]: E1211 18:42:13.218350 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:42:18 crc kubenswrapper[4877]: I1211 18:42:18.216545 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:42:18 crc kubenswrapper[4877]: E1211 18:42:18.217539 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:42:25 crc kubenswrapper[4877]: I1211 18:42:25.215355 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:42:25 crc kubenswrapper[4877]: E1211 18:42:25.216353 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:42:29 crc kubenswrapper[4877]: I1211 18:42:29.221667 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:42:29 crc kubenswrapper[4877]: E1211 18:42:29.222420 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:42:38 crc kubenswrapper[4877]: I1211 18:42:38.215731 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:42:38 crc kubenswrapper[4877]: E1211 18:42:38.216287 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:42:41 crc kubenswrapper[4877]: I1211 18:42:41.215168 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:42:41 crc kubenswrapper[4877]: E1211 18:42:41.216107 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:42:52 crc kubenswrapper[4877]: I1211 18:42:52.215988 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:42:52 crc kubenswrapper[4877]: I1211 18:42:52.216725 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:42:52 crc kubenswrapper[4877]: E1211 18:42:52.217004 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:42:52 crc kubenswrapper[4877]: E1211 18:42:52.217157 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:43:04 crc kubenswrapper[4877]: I1211 18:43:04.216940 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:43:04 crc kubenswrapper[4877]: E1211 18:43:04.218457 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:43:07 crc kubenswrapper[4877]: I1211 18:43:07.215770 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:43:07 crc kubenswrapper[4877]: E1211 18:43:07.216415 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:43:15 crc kubenswrapper[4877]: I1211 18:43:15.215394 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:43:15 crc kubenswrapper[4877]: E1211 18:43:15.216150 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:43:18 crc kubenswrapper[4877]: I1211 18:43:18.216311 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:43:18 crc kubenswrapper[4877]: E1211 18:43:18.217615 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:43:24 crc kubenswrapper[4877]: I1211 18:43:24.248999 4877 generic.go:334] "Generic (PLEG): container finished" podID="27668d56-a427-4392-85d2-4e4cc52342aa" containerID="0804d9763785536be2e4dc5bd7a1a6128e53e436ef79fafc5afd745cf1374654" exitCode=0 Dec 11 18:43:24 crc kubenswrapper[4877]: I1211 18:43:24.249100 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" event={"ID":"27668d56-a427-4392-85d2-4e4cc52342aa","Type":"ContainerDied","Data":"0804d9763785536be2e4dc5bd7a1a6128e53e436ef79fafc5afd745cf1374654"} Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.818779 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.829778 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.829838 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.829964 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgz4r\" (UniqueName: \"kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.829986 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.830018 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.830055 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.830086 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.830171 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.830199 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1\") pod \"27668d56-a427-4392-85d2-4e4cc52342aa\" (UID: \"27668d56-a427-4392-85d2-4e4cc52342aa\") " Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.837959 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.889887 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.891520 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r" (OuterVolumeSpecName: "kube-api-access-qgz4r") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "kube-api-access-qgz4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.893553 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.893587 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.898857 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.900630 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.925514 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931846 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgz4r\" (UniqueName: \"kubernetes.io/projected/27668d56-a427-4392-85d2-4e4cc52342aa-kube-api-access-qgz4r\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931868 4877 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931877 4877 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/27668d56-a427-4392-85d2-4e4cc52342aa-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931886 4877 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931895 4877 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931904 4877 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931912 4877 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.931921 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:25 crc kubenswrapper[4877]: I1211 18:43:25.955186 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory" (OuterVolumeSpecName: "inventory") pod "27668d56-a427-4392-85d2-4e4cc52342aa" (UID: "27668d56-a427-4392-85d2-4e4cc52342aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.033842 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27668d56-a427-4392-85d2-4e4cc52342aa-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.215585 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:43:26 crc kubenswrapper[4877]: E1211 18:43:26.216408 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.278284 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" 
event={"ID":"27668d56-a427-4392-85d2-4e4cc52342aa","Type":"ContainerDied","Data":"afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af"} Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.278337 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afaca4b71f476430a8c0c4be07638b24150d229290f881727720b19b746376af" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.278804 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6b22p" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.417937 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq"] Dec 11 18:43:26 crc kubenswrapper[4877]: E1211 18:43:26.418331 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="extract-content" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418351 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="extract-content" Dec 11 18:43:26 crc kubenswrapper[4877]: E1211 18:43:26.418402 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="registry-server" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418408 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="registry-server" Dec 11 18:43:26 crc kubenswrapper[4877]: E1211 18:43:26.418416 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="extract-utilities" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418422 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="extract-utilities" Dec 11 18:43:26 crc kubenswrapper[4877]: 
E1211 18:43:26.418451 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27668d56-a427-4392-85d2-4e4cc52342aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418457 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="27668d56-a427-4392-85d2-4e4cc52342aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418617 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="8824e39f-4b4b-4f34-9a4c-2f5f2c71d16d" containerName="registry-server" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.418648 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="27668d56-a427-4392-85d2-4e4cc52342aa" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.419441 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.422552 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.422562 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.423768 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.424286 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.428041 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gbg6h" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 
18:43:26.448159 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448268 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448315 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448684 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448737 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448772 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.448840 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqvs\" (UniqueName: \"kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.457358 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq"] Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.550448 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqvs\" (UniqueName: \"kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.550974 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.551077 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.551155 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.551305 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.551399 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.551492 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.555520 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.556188 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.556222 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc 
kubenswrapper[4877]: I1211 18:43:26.557362 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.557679 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.559230 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.568322 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqvs\" (UniqueName: \"kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:26 crc kubenswrapper[4877]: I1211 18:43:26.747077 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:43:27 crc kubenswrapper[4877]: I1211 18:43:27.342911 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq"] Dec 11 18:43:27 crc kubenswrapper[4877]: I1211 18:43:27.363338 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:43:28 crc kubenswrapper[4877]: I1211 18:43:28.300139 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" event={"ID":"352625c8-a275-44c6-9758-962aa05194b1","Type":"ContainerStarted","Data":"7ec1d7e96fd84b482b3be062dde7ecd995adb130e1d8b5211786b788c63b1ff9"} Dec 11 18:43:28 crc kubenswrapper[4877]: I1211 18:43:28.300673 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" event={"ID":"352625c8-a275-44c6-9758-962aa05194b1","Type":"ContainerStarted","Data":"38da260bd44523c7e18c59db9b4750ffdd414808e5b79ac4964ee2a9d457ede4"} Dec 11 18:43:28 crc kubenswrapper[4877]: I1211 18:43:28.330298 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" podStartSLOduration=1.8236729779999998 podStartE2EDuration="2.330272628s" podCreationTimestamp="2025-12-11 18:43:26 +0000 UTC" firstStartedPulling="2025-12-11 18:43:27.362957186 +0000 UTC m=+2568.389201250" lastFinishedPulling="2025-12-11 18:43:27.869556856 +0000 UTC m=+2568.895800900" observedRunningTime="2025-12-11 18:43:28.322392496 +0000 UTC m=+2569.348636560" watchObservedRunningTime="2025-12-11 18:43:28.330272628 +0000 UTC m=+2569.356516682" Dec 11 18:43:31 crc kubenswrapper[4877]: I1211 18:43:31.216221 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:43:31 crc kubenswrapper[4877]: 
E1211 18:43:31.216930 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:43:37 crc kubenswrapper[4877]: I1211 18:43:37.215285 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:43:37 crc kubenswrapper[4877]: E1211 18:43:37.216236 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:43:46 crc kubenswrapper[4877]: I1211 18:43:46.216572 4877 scope.go:117] "RemoveContainer" containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:43:47 crc kubenswrapper[4877]: I1211 18:43:47.517426 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc"} Dec 11 18:43:47 crc kubenswrapper[4877]: I1211 18:43:47.518786 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:43:51 crc kubenswrapper[4877]: I1211 18:43:51.144156 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:43:51 crc kubenswrapper[4877]: I1211 18:43:51.215128 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:43:51 crc kubenswrapper[4877]: E1211 18:43:51.215391 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:44:04 crc kubenswrapper[4877]: I1211 18:44:04.216227 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:44:04 crc kubenswrapper[4877]: E1211 18:44:04.217458 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:44:15 crc kubenswrapper[4877]: I1211 18:44:15.215828 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:44:15 crc kubenswrapper[4877]: E1211 18:44:15.216914 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:44:29 crc kubenswrapper[4877]: I1211 18:44:29.229426 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:44:30 crc kubenswrapper[4877]: I1211 18:44:30.027880 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940"} Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.173651 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt"] Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.175621 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.178699 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.179162 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.219010 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt"] Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.220242 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.220319 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.220587 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5pr\" (UniqueName: \"kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.322165 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.322286 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.322601 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5pr\" (UniqueName: 
\"kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.323865 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.337652 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.357442 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5pr\" (UniqueName: \"kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr\") pod \"collect-profiles-29424645-vk5lt\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.527621 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:00 crc kubenswrapper[4877]: I1211 18:45:00.989354 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt"] Dec 11 18:45:01 crc kubenswrapper[4877]: I1211 18:45:01.405840 4877 generic.go:334] "Generic (PLEG): container finished" podID="31a8d2cd-6d41-4f75-854d-65ea49e9517b" containerID="34d64620f6a6561ac56676cbd3eb180e41abe9dc8d9bb75428a33c4de7825900" exitCode=0 Dec 11 18:45:01 crc kubenswrapper[4877]: I1211 18:45:01.406048 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" event={"ID":"31a8d2cd-6d41-4f75-854d-65ea49e9517b","Type":"ContainerDied","Data":"34d64620f6a6561ac56676cbd3eb180e41abe9dc8d9bb75428a33c4de7825900"} Dec 11 18:45:01 crc kubenswrapper[4877]: I1211 18:45:01.406121 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" event={"ID":"31a8d2cd-6d41-4f75-854d-65ea49e9517b","Type":"ContainerStarted","Data":"bd382b0ef9c229b49624ff04b589eba2ed04b9fdd07774560110f294e22eb1b3"} Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.873082 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.876530 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5pr\" (UniqueName: \"kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr\") pod \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.883728 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr" (OuterVolumeSpecName: "kube-api-access-mt5pr") pod "31a8d2cd-6d41-4f75-854d-65ea49e9517b" (UID: "31a8d2cd-6d41-4f75-854d-65ea49e9517b"). InnerVolumeSpecName "kube-api-access-mt5pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.978020 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume\") pod \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.978154 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume\") pod \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\" (UID: \"31a8d2cd-6d41-4f75-854d-65ea49e9517b\") " Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.978758 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5pr\" (UniqueName: \"kubernetes.io/projected/31a8d2cd-6d41-4f75-854d-65ea49e9517b-kube-api-access-mt5pr\") on node \"crc\" DevicePath \"\"" Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.979253 4877 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume" (OuterVolumeSpecName: "config-volume") pod "31a8d2cd-6d41-4f75-854d-65ea49e9517b" (UID: "31a8d2cd-6d41-4f75-854d-65ea49e9517b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:45:02 crc kubenswrapper[4877]: I1211 18:45:02.983468 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31a8d2cd-6d41-4f75-854d-65ea49e9517b" (UID: "31a8d2cd-6d41-4f75-854d-65ea49e9517b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.080635 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31a8d2cd-6d41-4f75-854d-65ea49e9517b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.080675 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31a8d2cd-6d41-4f75-854d-65ea49e9517b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.429787 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" event={"ID":"31a8d2cd-6d41-4f75-854d-65ea49e9517b","Type":"ContainerDied","Data":"bd382b0ef9c229b49624ff04b589eba2ed04b9fdd07774560110f294e22eb1b3"} Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.429839 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd382b0ef9c229b49624ff04b589eba2ed04b9fdd07774560110f294e22eb1b3" Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.429918 4877 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424645-vk5lt" Dec 11 18:45:03 crc kubenswrapper[4877]: I1211 18:45:03.985951 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"] Dec 11 18:45:04 crc kubenswrapper[4877]: I1211 18:45:04.002432 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424600-hpkv2"] Dec 11 18:45:05 crc kubenswrapper[4877]: I1211 18:45:05.233302 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b21482e-04d8-493d-a274-670ce4961923" path="/var/lib/kubelet/pods/2b21482e-04d8-493d-a274-670ce4961923/volumes" Dec 11 18:45:46 crc kubenswrapper[4877]: I1211 18:45:46.587562 4877 scope.go:117] "RemoveContainer" containerID="150baca428c2169e9173e557e7a5bd0ec016443dd6b0aa88006b1055e7af5ea3" Dec 11 18:46:07 crc kubenswrapper[4877]: I1211 18:46:07.201999 4877 generic.go:334] "Generic (PLEG): container finished" podID="352625c8-a275-44c6-9758-962aa05194b1" containerID="7ec1d7e96fd84b482b3be062dde7ecd995adb130e1d8b5211786b788c63b1ff9" exitCode=0 Dec 11 18:46:07 crc kubenswrapper[4877]: I1211 18:46:07.202065 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" event={"ID":"352625c8-a275-44c6-9758-962aa05194b1","Type":"ContainerDied","Data":"7ec1d7e96fd84b482b3be062dde7ecd995adb130e1d8b5211786b788c63b1ff9"} Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.710635 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865397 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865512 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqvs\" (UniqueName: \"kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865586 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865601 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865807 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865840 4877 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.865878 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1\") pod \"352625c8-a275-44c6-9758-962aa05194b1\" (UID: \"352625c8-a275-44c6-9758-962aa05194b1\") " Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.903692 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.906517 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs" (OuterVolumeSpecName: "kube-api-access-4dqvs") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "kube-api-access-4dqvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.923908 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.940674 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.943832 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory" (OuterVolumeSpecName: "inventory") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.953641 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.965456 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "352625c8-a275-44c6-9758-962aa05194b1" (UID: "352625c8-a275-44c6-9758-962aa05194b1"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968843 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqvs\" (UniqueName: \"kubernetes.io/projected/352625c8-a275-44c6-9758-962aa05194b1-kube-api-access-4dqvs\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968891 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968907 4877 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968921 4877 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968932 4877 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968948 4877 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:08 crc kubenswrapper[4877]: I1211 18:46:08.968960 4877 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/352625c8-a275-44c6-9758-962aa05194b1-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 11 18:46:09 crc kubenswrapper[4877]: I1211 18:46:09.226136 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" Dec 11 18:46:09 crc kubenswrapper[4877]: I1211 18:46:09.228914 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq" event={"ID":"352625c8-a275-44c6-9758-962aa05194b1","Type":"ContainerDied","Data":"38da260bd44523c7e18c59db9b4750ffdd414808e5b79ac4964ee2a9d457ede4"} Dec 11 18:46:09 crc kubenswrapper[4877]: I1211 18:46:09.228999 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38da260bd44523c7e18c59db9b4750ffdd414808e5b79ac4964ee2a9d457ede4" Dec 11 18:46:19 crc kubenswrapper[4877]: I1211 18:46:19.370696 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" exitCode=1 Dec 11 18:46:19 crc kubenswrapper[4877]: I1211 18:46:19.370786 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc"} Dec 11 18:46:19 crc kubenswrapper[4877]: I1211 18:46:19.371307 4877 scope.go:117] "RemoveContainer" 
containerID="2e3a9fa3a16cd92c38789eb487df6e295eb7ff928de572fa66ab2d18d89741a9" Dec 11 18:46:19 crc kubenswrapper[4877]: I1211 18:46:19.372485 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:46:19 crc kubenswrapper[4877]: E1211 18:46:19.374118 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:46:21 crc kubenswrapper[4877]: I1211 18:46:21.137154 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:46:21 crc kubenswrapper[4877]: I1211 18:46:21.138142 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:46:21 crc kubenswrapper[4877]: E1211 18:46:21.138361 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:46:21 crc kubenswrapper[4877]: I1211 18:46:21.138530 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:46:21 crc kubenswrapper[4877]: I1211 18:46:21.396351 4877 scope.go:117] "RemoveContainer" 
containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:46:21 crc kubenswrapper[4877]: E1211 18:46:21.397678 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:46:36 crc kubenswrapper[4877]: I1211 18:46:36.215712 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:46:36 crc kubenswrapper[4877]: E1211 18:46:36.217396 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:46:46 crc kubenswrapper[4877]: I1211 18:46:46.638524 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:46:46 crc kubenswrapper[4877]: I1211 18:46:46.639223 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:46:51 crc 
kubenswrapper[4877]: I1211 18:46:51.217324 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:46:51 crc kubenswrapper[4877]: E1211 18:46:51.220572 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.605073 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 18:46:53 crc kubenswrapper[4877]: E1211 18:46:53.606022 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a8d2cd-6d41-4f75-854d-65ea49e9517b" containerName="collect-profiles" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.606041 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a8d2cd-6d41-4f75-854d-65ea49e9517b" containerName="collect-profiles" Dec 11 18:46:53 crc kubenswrapper[4877]: E1211 18:46:53.606065 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352625c8-a275-44c6-9758-962aa05194b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.606075 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="352625c8-a275-44c6-9758-962aa05194b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.606308 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a8d2cd-6d41-4f75-854d-65ea49e9517b" containerName="collect-profiles" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.606327 4877 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="352625c8-a275-44c6-9758-962aa05194b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.607104 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.610985 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l4jxc" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.611071 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.611482 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.611507 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.617334 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.694270 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.694865 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.694965 4877 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.695018 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.695103 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.695319 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.695657 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 
18:46:53.695690 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsmn\" (UniqueName: \"kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.696638 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805045 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805236 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805268 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsmn\" (UniqueName: \"kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805480 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805515 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805543 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805595 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805625 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.805660 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.806262 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.806491 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.806816 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.807284 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.808757 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary\") 
pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.816077 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.821668 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.821882 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.824203 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsmn\" (UniqueName: \"kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.862337 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " pod="openstack/tempest-tests-tempest" Dec 11 18:46:53 crc kubenswrapper[4877]: I1211 18:46:53.939954 4877 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 18:46:54 crc kubenswrapper[4877]: I1211 18:46:54.454167 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 18:46:54 crc kubenswrapper[4877]: I1211 18:46:54.811201 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60eefae6-1396-4f0a-b52a-7827dca29fb3","Type":"ContainerStarted","Data":"7a8b3d44864bd441a32c816463785903247befca0c0505e470e10c5b18d071bf"} Dec 11 18:47:02 crc kubenswrapper[4877]: I1211 18:47:02.214971 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:47:02 crc kubenswrapper[4877]: E1211 18:47:02.217784 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:47:14 crc kubenswrapper[4877]: I1211 18:47:14.215865 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:47:14 crc kubenswrapper[4877]: E1211 18:47:14.216798 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:47:16 crc kubenswrapper[4877]: I1211 18:47:16.638525 4877 patch_prober.go:28] interesting 
pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:47:16 crc kubenswrapper[4877]: I1211 18:47:16.639678 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:47:25 crc kubenswrapper[4877]: I1211 18:47:25.216056 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:47:25 crc kubenswrapper[4877]: E1211 18:47:25.217184 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:47:28 crc kubenswrapper[4877]: E1211 18:47:28.456239 4877 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 11 18:47:28 crc kubenswrapper[4877]: E1211 18:47:28.456707 4877 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbsmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(60eefae6-1396-4f0a-b52a-7827dca29fb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 18:47:28 crc kubenswrapper[4877]: E1211 18:47:28.457872 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="60eefae6-1396-4f0a-b52a-7827dca29fb3" Dec 11 18:47:29 crc kubenswrapper[4877]: E1211 18:47:29.157654 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="60eefae6-1396-4f0a-b52a-7827dca29fb3" Dec 11 18:47:40 crc 
kubenswrapper[4877]: I1211 18:47:40.215782 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:47:40 crc kubenswrapper[4877]: E1211 18:47:40.217846 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:47:40 crc kubenswrapper[4877]: I1211 18:47:40.760928 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 11 18:47:42 crc kubenswrapper[4877]: I1211 18:47:42.308939 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60eefae6-1396-4f0a-b52a-7827dca29fb3","Type":"ContainerStarted","Data":"a818848cc28e52eea770c08cd896cf599513ec3ddad5637dd5710a54a63d47d7"} Dec 11 18:47:42 crc kubenswrapper[4877]: I1211 18:47:42.332854 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.037420487 podStartE2EDuration="50.332832699s" podCreationTimestamp="2025-12-11 18:46:52 +0000 UTC" firstStartedPulling="2025-12-11 18:46:54.461408208 +0000 UTC m=+2775.487652262" lastFinishedPulling="2025-12-11 18:47:40.75682042 +0000 UTC m=+2821.783064474" observedRunningTime="2025-12-11 18:47:42.328727268 +0000 UTC m=+2823.354971392" watchObservedRunningTime="2025-12-11 18:47:42.332832699 +0000 UTC m=+2823.359076773" Dec 11 18:47:46 crc kubenswrapper[4877]: I1211 18:47:46.637665 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:47:46 crc kubenswrapper[4877]: I1211 18:47:46.638439 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:47:46 crc kubenswrapper[4877]: I1211 18:47:46.638531 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:47:46 crc kubenswrapper[4877]: I1211 18:47:46.639800 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:47:46 crc kubenswrapper[4877]: I1211 18:47:46.639931 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940" gracePeriod=600 Dec 11 18:47:47 crc kubenswrapper[4877]: I1211 18:47:47.366531 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940" exitCode=0 Dec 11 18:47:47 crc kubenswrapper[4877]: I1211 18:47:47.366634 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940"} Dec 11 18:47:47 crc kubenswrapper[4877]: I1211 18:47:47.366968 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d"} Dec 11 18:47:47 crc kubenswrapper[4877]: I1211 18:47:47.367012 4877 scope.go:117] "RemoveContainer" containerID="2d8cb1b0dde25e2858884cb5fa2cdf4d9a04cba75657d65f1d809b142eb11627" Dec 11 18:47:51 crc kubenswrapper[4877]: I1211 18:47:51.216027 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:47:51 crc kubenswrapper[4877]: E1211 18:47:51.217092 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:48:03 crc kubenswrapper[4877]: I1211 18:48:03.216134 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:48:03 crc kubenswrapper[4877]: E1211 18:48:03.217145 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 
18:48:17 crc kubenswrapper[4877]: I1211 18:48:17.216027 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:48:17 crc kubenswrapper[4877]: E1211 18:48:17.217240 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:48:31 crc kubenswrapper[4877]: I1211 18:48:31.216068 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:48:31 crc kubenswrapper[4877]: E1211 18:48:31.217621 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:48:42 crc kubenswrapper[4877]: I1211 18:48:42.216076 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:48:42 crc kubenswrapper[4877]: E1211 18:48:42.218753 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:48:54 crc 
kubenswrapper[4877]: I1211 18:48:54.215888 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:48:54 crc kubenswrapper[4877]: E1211 18:48:54.217219 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.047107 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.054015 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.112092 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.216683 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnbg\" (UniqueName: \"kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.216755 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " 
pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.216817 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.318605 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.318723 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnbg\" (UniqueName: \"kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.318772 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.319297 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " 
pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.319306 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.353015 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnbg\" (UniqueName: \"kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg\") pod \"community-operators-zdrkl\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.402118 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:02 crc kubenswrapper[4877]: I1211 18:49:02.924288 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:03 crc kubenswrapper[4877]: I1211 18:49:03.244753 4877 generic.go:334] "Generic (PLEG): container finished" podID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerID="9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac" exitCode=0 Dec 11 18:49:03 crc kubenswrapper[4877]: I1211 18:49:03.244796 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerDied","Data":"9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac"} Dec 11 18:49:03 crc kubenswrapper[4877]: I1211 18:49:03.244821 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" 
event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerStarted","Data":"fed8bf6e1527560378cb91c67475b5856558dc6f96d0e75907e53f37c6aef267"} Dec 11 18:49:03 crc kubenswrapper[4877]: I1211 18:49:03.247538 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:49:05 crc kubenswrapper[4877]: I1211 18:49:05.278168 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerStarted","Data":"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887"} Dec 11 18:49:06 crc kubenswrapper[4877]: I1211 18:49:06.215028 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:49:06 crc kubenswrapper[4877]: E1211 18:49:06.215686 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:06 crc kubenswrapper[4877]: I1211 18:49:06.295066 4877 generic.go:334] "Generic (PLEG): container finished" podID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerID="b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887" exitCode=0 Dec 11 18:49:06 crc kubenswrapper[4877]: I1211 18:49:06.295126 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerDied","Data":"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887"} Dec 11 18:49:08 crc kubenswrapper[4877]: I1211 18:49:08.320288 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerStarted","Data":"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d"} Dec 11 18:49:08 crc kubenswrapper[4877]: I1211 18:49:08.361539 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdrkl" podStartSLOduration=2.233149075 podStartE2EDuration="6.361512194s" podCreationTimestamp="2025-12-11 18:49:02 +0000 UTC" firstStartedPulling="2025-12-11 18:49:03.246949439 +0000 UTC m=+2904.273193483" lastFinishedPulling="2025-12-11 18:49:07.375312538 +0000 UTC m=+2908.401556602" observedRunningTime="2025-12-11 18:49:08.346632813 +0000 UTC m=+2909.372876897" watchObservedRunningTime="2025-12-11 18:49:08.361512194 +0000 UTC m=+2909.387756288" Dec 11 18:49:12 crc kubenswrapper[4877]: I1211 18:49:12.402772 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:12 crc kubenswrapper[4877]: I1211 18:49:12.403362 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:12 crc kubenswrapper[4877]: I1211 18:49:12.465690 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:13 crc kubenswrapper[4877]: I1211 18:49:13.467511 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:13 crc kubenswrapper[4877]: I1211 18:49:13.545491 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:15 crc kubenswrapper[4877]: I1211 18:49:15.393477 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zdrkl" 
podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="registry-server" containerID="cri-o://7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d" gracePeriod=2 Dec 11 18:49:15 crc kubenswrapper[4877]: I1211 18:49:15.998248 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.169095 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbnbg\" (UniqueName: \"kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg\") pod \"ddb9902c-3299-40f0-9d18-b8e69590f11c\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.169222 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities\") pod \"ddb9902c-3299-40f0-9d18-b8e69590f11c\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.169351 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content\") pod \"ddb9902c-3299-40f0-9d18-b8e69590f11c\" (UID: \"ddb9902c-3299-40f0-9d18-b8e69590f11c\") " Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.170418 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities" (OuterVolumeSpecName: "utilities") pod "ddb9902c-3299-40f0-9d18-b8e69590f11c" (UID: "ddb9902c-3299-40f0-9d18-b8e69590f11c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.177110 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg" (OuterVolumeSpecName: "kube-api-access-sbnbg") pod "ddb9902c-3299-40f0-9d18-b8e69590f11c" (UID: "ddb9902c-3299-40f0-9d18-b8e69590f11c"). InnerVolumeSpecName "kube-api-access-sbnbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.240226 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddb9902c-3299-40f0-9d18-b8e69590f11c" (UID: "ddb9902c-3299-40f0-9d18-b8e69590f11c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.273111 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbnbg\" (UniqueName: \"kubernetes.io/projected/ddb9902c-3299-40f0-9d18-b8e69590f11c-kube-api-access-sbnbg\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.273148 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.273161 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb9902c-3299-40f0-9d18-b8e69590f11c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.407116 4877 generic.go:334] "Generic (PLEG): container finished" podID="ddb9902c-3299-40f0-9d18-b8e69590f11c" 
containerID="7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d" exitCode=0 Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.407166 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerDied","Data":"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d"} Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.407516 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdrkl" event={"ID":"ddb9902c-3299-40f0-9d18-b8e69590f11c","Type":"ContainerDied","Data":"fed8bf6e1527560378cb91c67475b5856558dc6f96d0e75907e53f37c6aef267"} Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.407541 4877 scope.go:117] "RemoveContainer" containerID="7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.407213 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdrkl" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.437769 4877 scope.go:117] "RemoveContainer" containerID="b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.443216 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.460411 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zdrkl"] Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.465458 4877 scope.go:117] "RemoveContainer" containerID="9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.528635 4877 scope.go:117] "RemoveContainer" containerID="7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d" Dec 11 18:49:16 crc kubenswrapper[4877]: E1211 18:49:16.529111 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d\": container with ID starting with 7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d not found: ID does not exist" containerID="7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.529289 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d"} err="failed to get container status \"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d\": rpc error: code = NotFound desc = could not find container \"7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d\": container with ID starting with 7da08a1859d25d36af8bff951418ad5019297b0e4a6e4de41e677639ae9b2d7d not 
found: ID does not exist" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.529455 4877 scope.go:117] "RemoveContainer" containerID="b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887" Dec 11 18:49:16 crc kubenswrapper[4877]: E1211 18:49:16.530151 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887\": container with ID starting with b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887 not found: ID does not exist" containerID="b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.530183 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887"} err="failed to get container status \"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887\": rpc error: code = NotFound desc = could not find container \"b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887\": container with ID starting with b627594b9cb8cfce884d1d6885aaec4b69fe0a37dba2a1db709b9d990d6fe887 not found: ID does not exist" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.530206 4877 scope.go:117] "RemoveContainer" containerID="9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac" Dec 11 18:49:16 crc kubenswrapper[4877]: E1211 18:49:16.530831 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac\": container with ID starting with 9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac not found: ID does not exist" containerID="9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac" Dec 11 18:49:16 crc kubenswrapper[4877]: I1211 18:49:16.530884 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac"} err="failed to get container status \"9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac\": rpc error: code = NotFound desc = could not find container \"9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac\": container with ID starting with 9d33e1b1c6b347b7f94f029528ab483e6bf6b3ed7e698c94c0ef3291876b51ac not found: ID does not exist" Dec 11 18:49:17 crc kubenswrapper[4877]: I1211 18:49:17.239554 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" path="/var/lib/kubelet/pods/ddb9902c-3299-40f0-9d18-b8e69590f11c/volumes" Dec 11 18:49:21 crc kubenswrapper[4877]: I1211 18:49:21.215729 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:49:21 crc kubenswrapper[4877]: E1211 18:49:21.217036 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:33 crc kubenswrapper[4877]: I1211 18:49:33.215840 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:49:33 crc kubenswrapper[4877]: E1211 18:49:33.216909 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.179041 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:34 crc kubenswrapper[4877]: E1211 18:49:34.179763 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="registry-server" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.179792 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="registry-server" Dec 11 18:49:34 crc kubenswrapper[4877]: E1211 18:49:34.179812 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="extract-content" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.179822 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="extract-content" Dec 11 18:49:34 crc kubenswrapper[4877]: E1211 18:49:34.179847 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="extract-utilities" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.179858 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="extract-utilities" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.180045 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb9902c-3299-40f0-9d18-b8e69590f11c" containerName="registry-server" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.181469 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.224591 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.364147 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c98w\" (UniqueName: \"kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.364291 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.364437 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.466418 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.466504 4877 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.466615 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c98w\" (UniqueName: \"kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.466867 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.467061 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.485112 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c98w\" (UniqueName: \"kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w\") pod \"redhat-marketplace-q8kgx\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.500281 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:34 crc kubenswrapper[4877]: I1211 18:49:34.950862 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:35 crc kubenswrapper[4877]: I1211 18:49:35.624890 4877 generic.go:334] "Generic (PLEG): container finished" podID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerID="1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c" exitCode=0 Dec 11 18:49:35 crc kubenswrapper[4877]: I1211 18:49:35.624969 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerDied","Data":"1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c"} Dec 11 18:49:35 crc kubenswrapper[4877]: I1211 18:49:35.625176 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerStarted","Data":"1338b741abe336cc79ca842bfe11f249ff5b6cbeafc57582f9c951c1426af849"} Dec 11 18:49:37 crc kubenswrapper[4877]: E1211 18:49:37.168161 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e1b81bd_40e4_4ce0_821f_dc8fe8a9637a.slice/crio-conmon-44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819.scope\": RecentStats: unable to find data in memory cache]" Dec 11 18:49:37 crc kubenswrapper[4877]: I1211 18:49:37.653973 4877 generic.go:334] "Generic (PLEG): container finished" podID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerID="44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819" exitCode=0 Dec 11 18:49:37 crc kubenswrapper[4877]: I1211 18:49:37.654051 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" 
event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerDied","Data":"44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819"} Dec 11 18:49:38 crc kubenswrapper[4877]: I1211 18:49:38.666239 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerStarted","Data":"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712"} Dec 11 18:49:38 crc kubenswrapper[4877]: I1211 18:49:38.699468 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8kgx" podStartSLOduration=1.979805077 podStartE2EDuration="4.6994467s" podCreationTimestamp="2025-12-11 18:49:34 +0000 UTC" firstStartedPulling="2025-12-11 18:49:35.628120869 +0000 UTC m=+2936.654364943" lastFinishedPulling="2025-12-11 18:49:38.347762512 +0000 UTC m=+2939.374006566" observedRunningTime="2025-12-11 18:49:38.688681679 +0000 UTC m=+2939.714925753" watchObservedRunningTime="2025-12-11 18:49:38.6994467 +0000 UTC m=+2939.725690754" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.215770 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:49:44 crc kubenswrapper[4877]: E1211 18:49:44.217437 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.262401 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2k25c"] Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.265141 4877 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.276674 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2k25c"] Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.371792 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4q7c\" (UniqueName: \"kubernetes.io/projected/fbba94b5-69fd-4956-b1f7-8e397630c287-kube-api-access-j4q7c\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.371867 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-catalog-content\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.372044 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-utilities\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.473552 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-utilities\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.473675 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j4q7c\" (UniqueName: \"kubernetes.io/projected/fbba94b5-69fd-4956-b1f7-8e397630c287-kube-api-access-j4q7c\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.473711 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-catalog-content\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.474309 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-utilities\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.474354 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbba94b5-69fd-4956-b1f7-8e397630c287-catalog-content\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.500436 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.500691 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.504067 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4q7c\" 
(UniqueName: \"kubernetes.io/projected/fbba94b5-69fd-4956-b1f7-8e397630c287-kube-api-access-j4q7c\") pod \"redhat-operators-2k25c\" (UID: \"fbba94b5-69fd-4956-b1f7-8e397630c287\") " pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.574467 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.594980 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:49:44 crc kubenswrapper[4877]: I1211 18:49:44.795852 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:45 crc kubenswrapper[4877]: I1211 18:49:45.071208 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2k25c"] Dec 11 18:49:45 crc kubenswrapper[4877]: I1211 18:49:45.734904 4877 generic.go:334] "Generic (PLEG): container finished" podID="fbba94b5-69fd-4956-b1f7-8e397630c287" containerID="aa0ab8b90481ed5be7c683945f54a9042237378e86f2c0de153ea9100a644606" exitCode=0 Dec 11 18:49:45 crc kubenswrapper[4877]: I1211 18:49:45.736493 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2k25c" event={"ID":"fbba94b5-69fd-4956-b1f7-8e397630c287","Type":"ContainerDied","Data":"aa0ab8b90481ed5be7c683945f54a9042237378e86f2c0de153ea9100a644606"} Dec 11 18:49:45 crc kubenswrapper[4877]: I1211 18:49:45.736521 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2k25c" event={"ID":"fbba94b5-69fd-4956-b1f7-8e397630c287","Type":"ContainerStarted","Data":"1afb7ac69c27be4c9d1e2549402f931e53d93becaa8f70e0c025137727815a33"} Dec 11 18:49:46 crc kubenswrapper[4877]: I1211 18:49:46.835561 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:46 crc kubenswrapper[4877]: I1211 18:49:46.835968 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8kgx" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="registry-server" containerID="cri-o://e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712" gracePeriod=2 Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.506834 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.630842 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c98w\" (UniqueName: \"kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w\") pod \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.631006 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content\") pod \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.631072 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities\") pod \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\" (UID: \"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a\") " Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.631847 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities" (OuterVolumeSpecName: "utilities") pod "5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" (UID: 
"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.638362 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w" (OuterVolumeSpecName: "kube-api-access-4c98w") pod "5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" (UID: "5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a"). InnerVolumeSpecName "kube-api-access-4c98w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.658519 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" (UID: "5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.733127 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.733429 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.733439 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c98w\" (UniqueName: \"kubernetes.io/projected/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a-kube-api-access-4c98w\") on node \"crc\" DevicePath \"\"" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.761272 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerID="e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712" exitCode=0 Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.761316 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerDied","Data":"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712"} Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.761348 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8kgx" event={"ID":"5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a","Type":"ContainerDied","Data":"1338b741abe336cc79ca842bfe11f249ff5b6cbeafc57582f9c951c1426af849"} Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.761355 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8kgx" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.761371 4877 scope.go:117] "RemoveContainer" containerID="e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.798990 4877 scope.go:117] "RemoveContainer" containerID="44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.802426 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.811000 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8kgx"] Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.825832 4877 scope.go:117] "RemoveContainer" containerID="1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.874127 4877 scope.go:117] "RemoveContainer" 
containerID="e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712" Dec 11 18:49:47 crc kubenswrapper[4877]: E1211 18:49:47.875068 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712\": container with ID starting with e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712 not found: ID does not exist" containerID="e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.875107 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712"} err="failed to get container status \"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712\": rpc error: code = NotFound desc = could not find container \"e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712\": container with ID starting with e42bd7f6a7e0e3bc2ac6bc506844810cfa37b5a55756ef05246933915e1f9712 not found: ID does not exist" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.875135 4877 scope.go:117] "RemoveContainer" containerID="44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819" Dec 11 18:49:47 crc kubenswrapper[4877]: E1211 18:49:47.875589 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819\": container with ID starting with 44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819 not found: ID does not exist" containerID="44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.875659 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819"} err="failed to get container status \"44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819\": rpc error: code = NotFound desc = could not find container \"44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819\": container with ID starting with 44b928084221622877cfae3b1ac83320ccd6a7a83e1bc3c4d688c427820e9819 not found: ID does not exist" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.875695 4877 scope.go:117] "RemoveContainer" containerID="1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c" Dec 11 18:49:47 crc kubenswrapper[4877]: E1211 18:49:47.876037 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c\": container with ID starting with 1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c not found: ID does not exist" containerID="1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c" Dec 11 18:49:47 crc kubenswrapper[4877]: I1211 18:49:47.876084 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c"} err="failed to get container status \"1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c\": rpc error: code = NotFound desc = could not find container \"1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c\": container with ID starting with 1696a43c1bb72b7be29653c7d658c04bb36258ab92df715cca688cf53df57c3c not found: ID does not exist" Dec 11 18:49:49 crc kubenswrapper[4877]: I1211 18:49:49.229151 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" path="/var/lib/kubelet/pods/5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a/volumes" Dec 11 18:49:56 crc kubenswrapper[4877]: I1211 
18:49:56.214949 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:49:56 crc kubenswrapper[4877]: E1211 18:49:56.215609 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:49:56 crc kubenswrapper[4877]: I1211 18:49:56.870623 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2k25c" event={"ID":"fbba94b5-69fd-4956-b1f7-8e397630c287","Type":"ContainerStarted","Data":"4ca8343bc23d8ab7f67a0ec875b7393d5971b77b85e888c4bd2fc8614b5a085e"} Dec 11 18:49:58 crc kubenswrapper[4877]: I1211 18:49:58.891357 4877 generic.go:334] "Generic (PLEG): container finished" podID="fbba94b5-69fd-4956-b1f7-8e397630c287" containerID="4ca8343bc23d8ab7f67a0ec875b7393d5971b77b85e888c4bd2fc8614b5a085e" exitCode=0 Dec 11 18:49:58 crc kubenswrapper[4877]: I1211 18:49:58.891451 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2k25c" event={"ID":"fbba94b5-69fd-4956-b1f7-8e397630c287","Type":"ContainerDied","Data":"4ca8343bc23d8ab7f67a0ec875b7393d5971b77b85e888c4bd2fc8614b5a085e"} Dec 11 18:49:59 crc kubenswrapper[4877]: I1211 18:49:59.904806 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2k25c" event={"ID":"fbba94b5-69fd-4956-b1f7-8e397630c287","Type":"ContainerStarted","Data":"77aa724286d536e101b8e4e72ea00120c6b211c13f2f7041d1a47af37debf26b"} Dec 11 18:49:59 crc kubenswrapper[4877]: I1211 18:49:59.926730 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-2k25c" podStartSLOduration=2.0488702 podStartE2EDuration="15.926711529s" podCreationTimestamp="2025-12-11 18:49:44 +0000 UTC" firstStartedPulling="2025-12-11 18:49:45.737138819 +0000 UTC m=+2946.763382863" lastFinishedPulling="2025-12-11 18:49:59.614980118 +0000 UTC m=+2960.641224192" observedRunningTime="2025-12-11 18:49:59.925225579 +0000 UTC m=+2960.951469633" watchObservedRunningTime="2025-12-11 18:49:59.926711529 +0000 UTC m=+2960.952955593" Dec 11 18:50:04 crc kubenswrapper[4877]: I1211 18:50:04.595887 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:50:04 crc kubenswrapper[4877]: I1211 18:50:04.596295 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:50:05 crc kubenswrapper[4877]: I1211 18:50:05.683160 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2k25c" podUID="fbba94b5-69fd-4956-b1f7-8e397630c287" containerName="registry-server" probeResult="failure" output=< Dec 11 18:50:05 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 18:50:05 crc kubenswrapper[4877]: > Dec 11 18:50:09 crc kubenswrapper[4877]: I1211 18:50:09.221881 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:50:09 crc kubenswrapper[4877]: E1211 18:50:09.223120 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:50:14 crc kubenswrapper[4877]: I1211 
18:50:14.668890 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:50:14 crc kubenswrapper[4877]: I1211 18:50:14.736221 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2k25c" Dec 11 18:50:15 crc kubenswrapper[4877]: I1211 18:50:15.298148 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2k25c"] Dec 11 18:50:15 crc kubenswrapper[4877]: I1211 18:50:15.487167 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:50:15 crc kubenswrapper[4877]: I1211 18:50:15.487741 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4f422" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="registry-server" containerID="cri-o://5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f" gracePeriod=2 Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.065040 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.080625 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities\") pod \"22458b44-1dc4-447a-9e42-d5da68cc0e26\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.080769 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content\") pod \"22458b44-1dc4-447a-9e42-d5da68cc0e26\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.080919 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9sp8\" (UniqueName: \"kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8\") pod \"22458b44-1dc4-447a-9e42-d5da68cc0e26\" (UID: \"22458b44-1dc4-447a-9e42-d5da68cc0e26\") " Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.081791 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities" (OuterVolumeSpecName: "utilities") pod "22458b44-1dc4-447a-9e42-d5da68cc0e26" (UID: "22458b44-1dc4-447a-9e42-d5da68cc0e26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.088455 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8" (OuterVolumeSpecName: "kube-api-access-s9sp8") pod "22458b44-1dc4-447a-9e42-d5da68cc0e26" (UID: "22458b44-1dc4-447a-9e42-d5da68cc0e26"). InnerVolumeSpecName "kube-api-access-s9sp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.100901 4877 generic.go:334] "Generic (PLEG): container finished" podID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerID="5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f" exitCode=0 Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.101264 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerDied","Data":"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f"} Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.101303 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4f422" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.101328 4877 scope.go:117] "RemoveContainer" containerID="5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.101316 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4f422" event={"ID":"22458b44-1dc4-447a-9e42-d5da68cc0e26","Type":"ContainerDied","Data":"aae2afe922f5190126367f9d2633c1643ec42df83dfe90af7289450d4b83f6e2"} Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.152062 4877 scope.go:117] "RemoveContainer" containerID="f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.183570 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.183607 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9sp8\" (UniqueName: 
\"kubernetes.io/projected/22458b44-1dc4-447a-9e42-d5da68cc0e26-kube-api-access-s9sp8\") on node \"crc\" DevicePath \"\"" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.189607 4877 scope.go:117] "RemoveContainer" containerID="12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.213976 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22458b44-1dc4-447a-9e42-d5da68cc0e26" (UID: "22458b44-1dc4-447a-9e42-d5da68cc0e26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.222730 4877 scope.go:117] "RemoveContainer" containerID="5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f" Dec 11 18:50:16 crc kubenswrapper[4877]: E1211 18:50:16.223665 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f\": container with ID starting with 5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f not found: ID does not exist" containerID="5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.223781 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f"} err="failed to get container status \"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f\": rpc error: code = NotFound desc = could not find container \"5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f\": container with ID starting with 5c76c5c60c4d7a4ec2b3af3164a37d54dfef5c785209df096aeb25a5fdbe0e0f not found: ID does not exist" Dec 11 18:50:16 
crc kubenswrapper[4877]: I1211 18:50:16.223866 4877 scope.go:117] "RemoveContainer" containerID="f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd" Dec 11 18:50:16 crc kubenswrapper[4877]: E1211 18:50:16.224196 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd\": container with ID starting with f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd not found: ID does not exist" containerID="f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.224283 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd"} err="failed to get container status \"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd\": rpc error: code = NotFound desc = could not find container \"f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd\": container with ID starting with f822955dedd08607ced5c96b445576549f3095dc5dd5c58145466105ee60bbfd not found: ID does not exist" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.224354 4877 scope.go:117] "RemoveContainer" containerID="12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66" Dec 11 18:50:16 crc kubenswrapper[4877]: E1211 18:50:16.224755 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66\": container with ID starting with 12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66 not found: ID does not exist" containerID="12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.224803 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66"} err="failed to get container status \"12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66\": rpc error: code = NotFound desc = could not find container \"12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66\": container with ID starting with 12f77bdd1171e0d265180433a137e33a74dd45124e107302fdece2f1b741cc66 not found: ID does not exist" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.285973 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22458b44-1dc4-447a-9e42-d5da68cc0e26-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.428946 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.435948 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4f422"] Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.637454 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:50:16 crc kubenswrapper[4877]: I1211 18:50:16.637753 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:50:17 crc kubenswrapper[4877]: I1211 18:50:17.226163 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" 
path="/var/lib/kubelet/pods/22458b44-1dc4-447a-9e42-d5da68cc0e26/volumes" Dec 11 18:50:23 crc kubenswrapper[4877]: I1211 18:50:23.215203 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:50:23 crc kubenswrapper[4877]: E1211 18:50:23.215967 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:50:24 crc kubenswrapper[4877]: I1211 18:50:24.696273 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-79885c8c-7qj69" podUID="6776094e-cd5a-4539-9b5c-368030c70458" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 11 18:50:34 crc kubenswrapper[4877]: I1211 18:50:34.216319 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:50:34 crc kubenswrapper[4877]: E1211 18:50:34.217624 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:50:46 crc kubenswrapper[4877]: I1211 18:50:46.637396 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:50:46 crc kubenswrapper[4877]: I1211 18:50:46.638020 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:50:48 crc kubenswrapper[4877]: I1211 18:50:48.215726 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:50:48 crc kubenswrapper[4877]: E1211 18:50:48.217618 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:50:59 crc kubenswrapper[4877]: I1211 18:50:59.216901 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:50:59 crc kubenswrapper[4877]: E1211 18:50:59.218363 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:51:11 crc kubenswrapper[4877]: I1211 18:51:11.214931 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:51:11 crc kubenswrapper[4877]: 
E1211 18:51:11.215791 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:51:16 crc kubenswrapper[4877]: I1211 18:51:16.638242 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:51:16 crc kubenswrapper[4877]: I1211 18:51:16.639496 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:51:16 crc kubenswrapper[4877]: I1211 18:51:16.639558 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:51:16 crc kubenswrapper[4877]: I1211 18:51:16.640434 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:51:16 crc kubenswrapper[4877]: I1211 18:51:16.640507 4877 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" gracePeriod=600 Dec 11 18:51:17 crc kubenswrapper[4877]: E1211 18:51:17.267485 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:51:17 crc kubenswrapper[4877]: I1211 18:51:17.788472 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" exitCode=0 Dec 11 18:51:17 crc kubenswrapper[4877]: I1211 18:51:17.788546 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d"} Dec 11 18:51:17 crc kubenswrapper[4877]: I1211 18:51:17.788871 4877 scope.go:117] "RemoveContainer" containerID="77182d5734a42ce0c89a63409fd53002b3b12bfa707960009a1f5aa6b7b03940" Dec 11 18:51:17 crc kubenswrapper[4877]: I1211 18:51:17.790262 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:51:17 crc kubenswrapper[4877]: E1211 18:51:17.790821 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:51:25 crc kubenswrapper[4877]: I1211 18:51:25.215243 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:51:25 crc kubenswrapper[4877]: I1211 18:51:25.875603 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1"} Dec 11 18:51:25 crc kubenswrapper[4877]: I1211 18:51:25.876898 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:51:29 crc kubenswrapper[4877]: I1211 18:51:29.235264 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:51:29 crc kubenswrapper[4877]: E1211 18:51:29.236144 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:51:31 crc kubenswrapper[4877]: I1211 18:51:31.147232 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:51:44 crc kubenswrapper[4877]: I1211 18:51:44.217235 4877 scope.go:117] "RemoveContainer" 
containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:51:44 crc kubenswrapper[4877]: E1211 18:51:44.217936 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.216706 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.217489 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.422753 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423535 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423571 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423589 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" 
containerName="extract-content" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423604 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="extract-content" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423639 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="extract-utilities" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423653 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="extract-utilities" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423679 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="extract-utilities" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423690 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="extract-utilities" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423712 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="extract-content" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423724 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="extract-content" Dec 11 18:51:58 crc kubenswrapper[4877]: E1211 18:51:58.423770 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.423783 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.424190 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="22458b44-1dc4-447a-9e42-d5da68cc0e26" 
containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.424218 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1b81bd-40e4-4ce0-821f-dc8fe8a9637a" containerName="registry-server" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.427106 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.440364 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.523161 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.523291 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.523329 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbcn\" (UniqueName: \"kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.624864 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.625165 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmbcn\" (UniqueName: \"kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.625226 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.625700 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.625752 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.650006 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmbcn\" (UniqueName: 
\"kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn\") pod \"certified-operators-xw72k\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:58 crc kubenswrapper[4877]: I1211 18:51:58.778071 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:51:59 crc kubenswrapper[4877]: I1211 18:51:59.304440 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:52:00 crc kubenswrapper[4877]: I1211 18:52:00.222934 4877 generic.go:334] "Generic (PLEG): container finished" podID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerID="e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f" exitCode=0 Dec 11 18:52:00 crc kubenswrapper[4877]: I1211 18:52:00.223069 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerDied","Data":"e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f"} Dec 11 18:52:00 crc kubenswrapper[4877]: I1211 18:52:00.223723 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerStarted","Data":"832169f8852e15ffa15ab426f58e355c99be8da67c60997002323cceb9ae9987"} Dec 11 18:52:01 crc kubenswrapper[4877]: I1211 18:52:01.240600 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerStarted","Data":"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6"} Dec 11 18:52:04 crc kubenswrapper[4877]: I1211 18:52:04.274291 4877 generic.go:334] "Generic (PLEG): container finished" podID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" 
containerID="26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6" exitCode=0 Dec 11 18:52:04 crc kubenswrapper[4877]: I1211 18:52:04.274460 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerDied","Data":"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6"} Dec 11 18:52:05 crc kubenswrapper[4877]: I1211 18:52:05.291325 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerStarted","Data":"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e"} Dec 11 18:52:05 crc kubenswrapper[4877]: I1211 18:52:05.328617 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xw72k" podStartSLOduration=2.816700449 podStartE2EDuration="7.328593536s" podCreationTimestamp="2025-12-11 18:51:58 +0000 UTC" firstStartedPulling="2025-12-11 18:52:00.225082798 +0000 UTC m=+3081.251326902" lastFinishedPulling="2025-12-11 18:52:04.736975935 +0000 UTC m=+3085.763219989" observedRunningTime="2025-12-11 18:52:05.314188508 +0000 UTC m=+3086.340432562" watchObservedRunningTime="2025-12-11 18:52:05.328593536 +0000 UTC m=+3086.354837600" Dec 11 18:52:08 crc kubenswrapper[4877]: I1211 18:52:08.778263 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:08 crc kubenswrapper[4877]: I1211 18:52:08.778778 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:08 crc kubenswrapper[4877]: I1211 18:52:08.834824 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:09 crc kubenswrapper[4877]: I1211 
18:52:09.426005 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:09 crc kubenswrapper[4877]: I1211 18:52:09.493421 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:52:11 crc kubenswrapper[4877]: I1211 18:52:11.352227 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xw72k" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="registry-server" containerID="cri-o://1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e" gracePeriod=2 Dec 11 18:52:11 crc kubenswrapper[4877]: I1211 18:52:11.919323 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.023434 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmbcn\" (UniqueName: \"kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn\") pod \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.023578 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content\") pod \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.023626 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities\") pod \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\" (UID: \"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e\") " Dec 11 18:52:12 crc kubenswrapper[4877]: 
I1211 18:52:12.024453 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities" (OuterVolumeSpecName: "utilities") pod "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" (UID: "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.024808 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.029215 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn" (OuterVolumeSpecName: "kube-api-access-fmbcn") pod "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" (UID: "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e"). InnerVolumeSpecName "kube-api-access-fmbcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.077241 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" (UID: "f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.126644 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmbcn\" (UniqueName: \"kubernetes.io/projected/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-kube-api-access-fmbcn\") on node \"crc\" DevicePath \"\"" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.126684 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.368766 4877 generic.go:334] "Generic (PLEG): container finished" podID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerID="1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e" exitCode=0 Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.368839 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerDied","Data":"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e"} Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.368913 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw72k" event={"ID":"f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e","Type":"ContainerDied","Data":"832169f8852e15ffa15ab426f58e355c99be8da67c60997002323cceb9ae9987"} Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.368944 4877 scope.go:117] "RemoveContainer" containerID="1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.368857 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xw72k" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.423860 4877 scope.go:117] "RemoveContainer" containerID="26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.431837 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.443167 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xw72k"] Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.467506 4877 scope.go:117] "RemoveContainer" containerID="e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.526159 4877 scope.go:117] "RemoveContainer" containerID="1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e" Dec 11 18:52:12 crc kubenswrapper[4877]: E1211 18:52:12.527103 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e\": container with ID starting with 1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e not found: ID does not exist" containerID="1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.527157 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e"} err="failed to get container status \"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e\": rpc error: code = NotFound desc = could not find container \"1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e\": container with ID starting with 1aad9d03825786896ab9a4b8e1e9f03c5de7d5074cf41f7904be5d9d91dd754e not 
found: ID does not exist" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.527186 4877 scope.go:117] "RemoveContainer" containerID="26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6" Dec 11 18:52:12 crc kubenswrapper[4877]: E1211 18:52:12.527738 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6\": container with ID starting with 26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6 not found: ID does not exist" containerID="26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.527778 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6"} err="failed to get container status \"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6\": rpc error: code = NotFound desc = could not find container \"26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6\": container with ID starting with 26375f7cdc1d9cce479713e36bf81385ccc9c4f65072087ba94795e5874424c6 not found: ID does not exist" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.527808 4877 scope.go:117] "RemoveContainer" containerID="e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f" Dec 11 18:52:12 crc kubenswrapper[4877]: E1211 18:52:12.528160 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f\": container with ID starting with e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f not found: ID does not exist" containerID="e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f" Dec 11 18:52:12 crc kubenswrapper[4877]: I1211 18:52:12.528187 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f"} err="failed to get container status \"e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f\": rpc error: code = NotFound desc = could not find container \"e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f\": container with ID starting with e2446028a4b549d76783f5b0ab728c6717661e99ad7489a9145f6b9f5f99cc1f not found: ID does not exist" Dec 11 18:52:13 crc kubenswrapper[4877]: I1211 18:52:13.215449 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:52:13 crc kubenswrapper[4877]: E1211 18:52:13.215748 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:52:13 crc kubenswrapper[4877]: I1211 18:52:13.228533 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" path="/var/lib/kubelet/pods/f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e/volumes" Dec 11 18:52:24 crc kubenswrapper[4877]: I1211 18:52:24.216276 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:52:24 crc kubenswrapper[4877]: E1211 18:52:24.219499 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:52:38 crc kubenswrapper[4877]: I1211 18:52:38.215895 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:52:38 crc kubenswrapper[4877]: E1211 18:52:38.217208 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:52:50 crc kubenswrapper[4877]: I1211 18:52:50.216342 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:52:50 crc kubenswrapper[4877]: E1211 18:52:50.217753 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:53:04 crc kubenswrapper[4877]: I1211 18:53:04.215519 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:53:04 crc kubenswrapper[4877]: E1211 18:53:04.216609 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:53:16 crc kubenswrapper[4877]: I1211 18:53:16.216021 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:53:16 crc kubenswrapper[4877]: E1211 18:53:16.216879 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:53:30 crc kubenswrapper[4877]: I1211 18:53:30.215947 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:53:30 crc kubenswrapper[4877]: E1211 18:53:30.216742 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:53:42 crc kubenswrapper[4877]: I1211 18:53:42.215502 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:53:42 crc kubenswrapper[4877]: E1211 18:53:42.216262 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:53:57 crc kubenswrapper[4877]: I1211 18:53:57.215209 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:53:57 crc kubenswrapper[4877]: E1211 18:53:57.216357 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:54:00 crc kubenswrapper[4877]: I1211 18:54:00.606858 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" exitCode=1 Dec 11 18:54:00 crc kubenswrapper[4877]: I1211 18:54:00.606936 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1"} Dec 11 18:54:00 crc kubenswrapper[4877]: I1211 18:54:00.607555 4877 scope.go:117] "RemoveContainer" containerID="9fbefa12565bac516be41193e39ab735cb0ce7426b4d7431e2918b251cf5bbfc" Dec 11 18:54:00 crc kubenswrapper[4877]: I1211 18:54:00.608297 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:00 crc kubenswrapper[4877]: E1211 18:54:00.608777 4877 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:01 crc kubenswrapper[4877]: I1211 18:54:01.137473 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:54:01 crc kubenswrapper[4877]: I1211 18:54:01.137559 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:54:01 crc kubenswrapper[4877]: I1211 18:54:01.625902 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:01 crc kubenswrapper[4877]: E1211 18:54:01.627459 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:10 crc kubenswrapper[4877]: I1211 18:54:10.216158 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:54:10 crc kubenswrapper[4877]: E1211 18:54:10.217257 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:54:15 crc kubenswrapper[4877]: I1211 18:54:15.215467 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:15 crc kubenswrapper[4877]: E1211 18:54:15.216219 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:25 crc kubenswrapper[4877]: I1211 18:54:25.216277 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:54:25 crc kubenswrapper[4877]: E1211 18:54:25.217432 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:54:27 crc kubenswrapper[4877]: I1211 18:54:27.217611 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:27 crc kubenswrapper[4877]: E1211 18:54:27.218789 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:39 crc kubenswrapper[4877]: I1211 18:54:39.230309 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:54:39 crc kubenswrapper[4877]: E1211 18:54:39.230998 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:54:40 crc kubenswrapper[4877]: I1211 18:54:40.215607 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:40 crc kubenswrapper[4877]: E1211 18:54:40.216723 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:51 crc kubenswrapper[4877]: I1211 18:54:51.215906 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:54:51 crc kubenswrapper[4877]: E1211 18:54:51.216953 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:54:52 crc kubenswrapper[4877]: I1211 18:54:52.215461 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:54:52 crc kubenswrapper[4877]: E1211 18:54:52.215921 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:55:03 crc kubenswrapper[4877]: I1211 18:55:03.215806 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:55:03 crc kubenswrapper[4877]: E1211 18:55:03.216359 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:55:04 crc kubenswrapper[4877]: I1211 18:55:04.215237 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:55:04 crc kubenswrapper[4877]: E1211 18:55:04.215709 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:55:18 crc kubenswrapper[4877]: I1211 18:55:18.216618 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:55:18 crc kubenswrapper[4877]: E1211 18:55:18.218734 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:55:18 crc kubenswrapper[4877]: I1211 18:55:18.234940 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:55:18 crc kubenswrapper[4877]: E1211 18:55:18.237932 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:55:30 crc kubenswrapper[4877]: I1211 18:55:30.215940 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:55:30 crc kubenswrapper[4877]: E1211 18:55:30.216700 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:55:32 crc kubenswrapper[4877]: I1211 18:55:32.217064 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:55:32 crc kubenswrapper[4877]: E1211 18:55:32.217758 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:55:41 crc kubenswrapper[4877]: I1211 18:55:41.216291 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:55:41 crc kubenswrapper[4877]: E1211 18:55:41.217287 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:55:47 crc kubenswrapper[4877]: I1211 18:55:47.215410 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:55:47 crc kubenswrapper[4877]: E1211 18:55:47.216217 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:55:52 crc kubenswrapper[4877]: I1211 18:55:52.216609 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:55:52 crc kubenswrapper[4877]: E1211 18:55:52.218029 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:56:02 crc kubenswrapper[4877]: I1211 18:56:02.216410 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:56:02 crc kubenswrapper[4877]: E1211 18:56:02.217953 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:56:05 crc kubenswrapper[4877]: I1211 18:56:05.215125 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:56:05 crc kubenswrapper[4877]: E1211 18:56:05.215806 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:56:13 crc kubenswrapper[4877]: I1211 18:56:13.217215 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:56:13 crc kubenswrapper[4877]: E1211 18:56:13.217990 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 18:56:18 crc kubenswrapper[4877]: I1211 18:56:18.215930 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:56:18 crc kubenswrapper[4877]: E1211 18:56:18.216871 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:56:26 crc kubenswrapper[4877]: I1211 18:56:26.215568 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:56:27 crc kubenswrapper[4877]: I1211 18:56:27.087902 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92"} Dec 11 18:56:33 crc kubenswrapper[4877]: I1211 18:56:33.215949 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:56:33 crc kubenswrapper[4877]: E1211 18:56:33.216798 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:56:45 crc kubenswrapper[4877]: I1211 18:56:45.216656 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:56:45 crc kubenswrapper[4877]: E1211 18:56:45.219125 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:57:00 crc kubenswrapper[4877]: I1211 18:57:00.215900 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:57:00 crc kubenswrapper[4877]: E1211 18:57:00.217163 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:57:12 crc kubenswrapper[4877]: I1211 18:57:12.215925 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:57:12 crc kubenswrapper[4877]: E1211 18:57:12.217238 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:57:23 crc kubenswrapper[4877]: I1211 18:57:23.216437 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:57:23 crc kubenswrapper[4877]: E1211 18:57:23.217211 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:57:36 crc kubenswrapper[4877]: I1211 18:57:36.215454 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:57:36 crc kubenswrapper[4877]: E1211 18:57:36.216142 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:57:48 crc kubenswrapper[4877]: I1211 18:57:48.215609 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:57:48 crc kubenswrapper[4877]: E1211 18:57:48.216299 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:58:01 crc kubenswrapper[4877]: I1211 18:58:01.216789 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:58:01 crc kubenswrapper[4877]: E1211 18:58:01.217590 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:58:13 crc kubenswrapper[4877]: I1211 18:58:13.215662 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:58:13 crc kubenswrapper[4877]: E1211 18:58:13.216799 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:58:28 crc kubenswrapper[4877]: I1211 18:58:28.215847 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:58:28 crc kubenswrapper[4877]: E1211 18:58:28.216858 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:58:43 crc kubenswrapper[4877]: I1211 18:58:43.215362 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:58:43 crc kubenswrapper[4877]: E1211 18:58:43.216568 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:58:46 crc kubenswrapper[4877]: I1211 18:58:46.638339 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:58:46 crc kubenswrapper[4877]: I1211 18:58:46.638786 4877 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:58:54 crc kubenswrapper[4877]: I1211 18:58:54.215587 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:58:54 crc kubenswrapper[4877]: E1211 18:58:54.216337 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 18:59:05 crc kubenswrapper[4877]: I1211 18:59:05.909200 4877 generic.go:334] "Generic (PLEG): container finished" podID="60eefae6-1396-4f0a-b52a-7827dca29fb3" containerID="a818848cc28e52eea770c08cd896cf599513ec3ddad5637dd5710a54a63d47d7" exitCode=0 Dec 11 18:59:05 crc kubenswrapper[4877]: I1211 18:59:05.909344 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60eefae6-1396-4f0a-b52a-7827dca29fb3","Type":"ContainerDied","Data":"a818848cc28e52eea770c08cd896cf599513ec3ddad5637dd5710a54a63d47d7"} Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.382977 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540460 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540597 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540627 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540660 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540709 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbsmn\" (UniqueName: \"kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540738 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540773 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540827 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.540922 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"60eefae6-1396-4f0a-b52a-7827dca29fb3\" (UID: \"60eefae6-1396-4f0a-b52a-7827dca29fb3\") " Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.541956 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.542549 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data" (OuterVolumeSpecName: "config-data") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.549912 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.550025 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.555625 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn" (OuterVolumeSpecName: "kube-api-access-jbsmn") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "kube-api-access-jbsmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.570633 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.576368 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.595326 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.596563 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "60eefae6-1396-4f0a-b52a-7827dca29fb3" (UID: "60eefae6-1396-4f0a-b52a-7827dca29fb3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.643943 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644034 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644057 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbsmn\" (UniqueName: \"kubernetes.io/projected/60eefae6-1396-4f0a-b52a-7827dca29fb3-kube-api-access-jbsmn\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644078 4877 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644104 4877 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644131 4877 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/60eefae6-1396-4f0a-b52a-7827dca29fb3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644222 4877 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 
11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644255 4877 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/60eefae6-1396-4f0a-b52a-7827dca29fb3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.644278 4877 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60eefae6-1396-4f0a-b52a-7827dca29fb3-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.670033 4877 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.745895 4877 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.932470 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"60eefae6-1396-4f0a-b52a-7827dca29fb3","Type":"ContainerDied","Data":"7a8b3d44864bd441a32c816463785903247befca0c0505e470e10c5b18d071bf"} Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.932525 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8b3d44864bd441a32c816463785903247befca0c0505e470e10c5b18d071bf" Dec 11 18:59:07 crc kubenswrapper[4877]: I1211 18:59:07.932528 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 18:59:08 crc kubenswrapper[4877]: I1211 18:59:08.215729 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 18:59:08 crc kubenswrapper[4877]: I1211 18:59:08.950819 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68"} Dec 11 18:59:08 crc kubenswrapper[4877]: I1211 18:59:08.953320 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.539425 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 18:59:15 crc kubenswrapper[4877]: E1211 18:59:15.541446 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="extract-utilities" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.541563 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="extract-utilities" Dec 11 18:59:15 crc kubenswrapper[4877]: E1211 18:59:15.541649 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="registry-server" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.541726 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="registry-server" Dec 11 18:59:15 crc kubenswrapper[4877]: E1211 18:59:15.541834 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60eefae6-1396-4f0a-b52a-7827dca29fb3" 
containerName="tempest-tests-tempest-tests-runner" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.541913 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="60eefae6-1396-4f0a-b52a-7827dca29fb3" containerName="tempest-tests-tempest-tests-runner" Dec 11 18:59:15 crc kubenswrapper[4877]: E1211 18:59:15.542017 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="extract-content" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.542093 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="extract-content" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.542436 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="60eefae6-1396-4f0a-b52a-7827dca29fb3" containerName="tempest-tests-tempest-tests-runner" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.542543 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ad8fc4-50f3-4ef1-90f5-465a69ed8f6e" containerName="registry-server" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.543366 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.546277 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l4jxc" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.611124 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.720095 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42qv\" (UniqueName: \"kubernetes.io/projected/4dc36034-313e-409f-86f4-e69f9ae0ee24-kube-api-access-c42qv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.720281 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.822221 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c42qv\" (UniqueName: \"kubernetes.io/projected/4dc36034-313e-409f-86f4-e69f9ae0ee24-kube-api-access-c42qv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.822365 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.822943 4877 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.842058 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c42qv\" (UniqueName: \"kubernetes.io/projected/4dc36034-313e-409f-86f4-e69f9ae0ee24-kube-api-access-c42qv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.850360 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4dc36034-313e-409f-86f4-e69f9ae0ee24\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:15 crc kubenswrapper[4877]: I1211 18:59:15.916341 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 18:59:16 crc kubenswrapper[4877]: I1211 18:59:16.376132 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 18:59:16 crc kubenswrapper[4877]: I1211 18:59:16.383829 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 18:59:16 crc kubenswrapper[4877]: I1211 18:59:16.637498 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:59:16 crc kubenswrapper[4877]: I1211 18:59:16.637558 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:59:17 crc kubenswrapper[4877]: I1211 18:59:17.043271 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4dc36034-313e-409f-86f4-e69f9ae0ee24","Type":"ContainerStarted","Data":"f65ae7d5c0febf4202b1f26f0db7afe37bea497648f3ec3e2323be3f4ca272e0"} Dec 11 18:59:18 crc kubenswrapper[4877]: I1211 18:59:18.057464 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4dc36034-313e-409f-86f4-e69f9ae0ee24","Type":"ContainerStarted","Data":"1384c8e0fd42ff47c3b2e0ff5702f5575a44389a63fc54295eb6a7f8d3010fe8"} Dec 11 18:59:18 crc kubenswrapper[4877]: I1211 18:59:18.078248 4877 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.013541087 podStartE2EDuration="3.078228564s" podCreationTimestamp="2025-12-11 18:59:15 +0000 UTC" firstStartedPulling="2025-12-11 18:59:16.383567906 +0000 UTC m=+3517.409811950" lastFinishedPulling="2025-12-11 18:59:17.448255373 +0000 UTC m=+3518.474499427" observedRunningTime="2025-12-11 18:59:18.072877512 +0000 UTC m=+3519.099121556" watchObservedRunningTime="2025-12-11 18:59:18.078228564 +0000 UTC m=+3519.104472618" Dec 11 18:59:21 crc kubenswrapper[4877]: I1211 18:59:21.146026 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.230360 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ssbsl/must-gather-jm5tr"] Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.232253 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.234886 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ssbsl"/"openshift-service-ca.crt" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.234954 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ssbsl"/"default-dockercfg-96pmb" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.235179 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ssbsl"/"kube-root-ca.crt" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.240266 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ssbsl/must-gather-jm5tr"] Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.274959 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdqk\" (UniqueName: \"kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.275165 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.377105 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdqk\" (UniqueName: \"kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " 
pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.377273 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.377732 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.394027 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdqk\" (UniqueName: \"kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk\") pod \"must-gather-jm5tr\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:41 crc kubenswrapper[4877]: I1211 18:59:41.554522 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 18:59:42 crc kubenswrapper[4877]: I1211 18:59:42.050285 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ssbsl/must-gather-jm5tr"] Dec 11 18:59:42 crc kubenswrapper[4877]: W1211 18:59:42.057503 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5ba0c98_95b6_4e1a_8fac_d5c2ea26e693.slice/crio-9fda8c91d5af6b340444dfc6c2f9bacc2c632270322acdc00c7fa72f13f6e7c2 WatchSource:0}: Error finding container 9fda8c91d5af6b340444dfc6c2f9bacc2c632270322acdc00c7fa72f13f6e7c2: Status 404 returned error can't find the container with id 9fda8c91d5af6b340444dfc6c2f9bacc2c632270322acdc00c7fa72f13f6e7c2 Dec 11 18:59:42 crc kubenswrapper[4877]: I1211 18:59:42.326700 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" event={"ID":"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693","Type":"ContainerStarted","Data":"9fda8c91d5af6b340444dfc6c2f9bacc2c632270322acdc00c7fa72f13f6e7c2"} Dec 11 18:59:46 crc kubenswrapper[4877]: I1211 18:59:46.637500 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 18:59:46 crc kubenswrapper[4877]: I1211 18:59:46.638147 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 18:59:46 crc kubenswrapper[4877]: I1211 18:59:46.638204 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 18:59:46 crc kubenswrapper[4877]: I1211 18:59:46.639054 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 18:59:46 crc kubenswrapper[4877]: I1211 18:59:46.639214 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92" gracePeriod=600 Dec 11 18:59:47 crc kubenswrapper[4877]: I1211 18:59:47.398240 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92" exitCode=0 Dec 11 18:59:47 crc kubenswrapper[4877]: I1211 18:59:47.398292 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92"} Dec 11 18:59:47 crc kubenswrapper[4877]: I1211 18:59:47.398329 4877 scope.go:117] "RemoveContainer" containerID="0d611111bac434400a8308e0e3ffcab8c8795a23b0e1421de2f8ab0130d51c3d" Dec 11 18:59:49 crc kubenswrapper[4877]: I1211 18:59:49.427672 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41"} Dec 11 18:59:49 crc kubenswrapper[4877]: I1211 18:59:49.433122 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" event={"ID":"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693","Type":"ContainerStarted","Data":"a13690f25e3126b418025215bf4221a250bc3a1bbda27075b56c16a4aa1b919e"} Dec 11 18:59:49 crc kubenswrapper[4877]: I1211 18:59:49.433179 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" event={"ID":"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693","Type":"ContainerStarted","Data":"4c4c5e8f28edd1290f10d0d09fa142106e844fe14c26373ceb05b245374bd852"} Dec 11 18:59:49 crc kubenswrapper[4877]: I1211 18:59:49.483170 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" podStartSLOduration=2.335476579 podStartE2EDuration="8.483144331s" podCreationTimestamp="2025-12-11 18:59:41 +0000 UTC" firstStartedPulling="2025-12-11 18:59:42.061975371 +0000 UTC m=+3543.088219415" lastFinishedPulling="2025-12-11 18:59:48.209643103 +0000 UTC m=+3549.235887167" observedRunningTime="2025-12-11 18:59:49.468960615 +0000 UTC m=+3550.495204659" watchObservedRunningTime="2025-12-11 18:59:49.483144331 +0000 UTC m=+3550.509388365" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.532243 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ntr7t"] Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.535752 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.612175 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblpd\" (UniqueName: \"kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.612797 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.715013 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblpd\" (UniqueName: \"kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.715589 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.715732 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc 
kubenswrapper[4877]: I1211 18:59:52.743126 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblpd\" (UniqueName: \"kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd\") pod \"crc-debug-ntr7t\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: I1211 18:59:52.855654 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 18:59:52 crc kubenswrapper[4877]: W1211 18:59:52.897888 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a1d4c7_45fe_4644_9c71_b2e839813b50.slice/crio-1d7b92aa92cb451a298ece1630c554135222e0cf2ff0b574f0647312be9e4197 WatchSource:0}: Error finding container 1d7b92aa92cb451a298ece1630c554135222e0cf2ff0b574f0647312be9e4197: Status 404 returned error can't find the container with id 1d7b92aa92cb451a298ece1630c554135222e0cf2ff0b574f0647312be9e4197 Dec 11 18:59:53 crc kubenswrapper[4877]: I1211 18:59:53.475114 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" event={"ID":"f0a1d4c7-45fe-4644-9c71-b2e839813b50","Type":"ContainerStarted","Data":"1d7b92aa92cb451a298ece1630c554135222e0cf2ff0b574f0647312be9e4197"} Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.165063 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk"] Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.167562 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.169575 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.169763 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.173175 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk"] Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.263021 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7w5\" (UniqueName: \"kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.263318 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.263539 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.364996 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n7w5\" (UniqueName: \"kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.365697 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.366786 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.366685 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.373223 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.384765 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n7w5\" (UniqueName: \"kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5\") pod \"collect-profiles-29424660-8ggmk\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:00 crc kubenswrapper[4877]: I1211 19:00:00.496330 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.255176 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk"] Dec 11 19:00:04 crc kubenswrapper[4877]: W1211 19:00:04.262350 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7612e29_b761_465c_9602_1d6432c7f05a.slice/crio-a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01 WatchSource:0}: Error finding container a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01: Status 404 returned error can't find the container with id a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01 Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.586495 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" event={"ID":"f0a1d4c7-45fe-4644-9c71-b2e839813b50","Type":"ContainerStarted","Data":"dd6187e6c4dc0172e07cb359f8e77e92ce24627996931cf3fb47a0d05188b107"} Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.588149 
4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" event={"ID":"c7612e29-b761-465c-9602-1d6432c7f05a","Type":"ContainerStarted","Data":"0ce2916ab71be04e865565e41d84f5d2603e4e400f6d0d2f6cefe9483842451f"} Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.588170 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" event={"ID":"c7612e29-b761-465c-9602-1d6432c7f05a","Type":"ContainerStarted","Data":"a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01"} Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.617254 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" podStartSLOduration=1.65436977 podStartE2EDuration="12.617234041s" podCreationTimestamp="2025-12-11 18:59:52 +0000 UTC" firstStartedPulling="2025-12-11 18:59:52.901093803 +0000 UTC m=+3553.927337847" lastFinishedPulling="2025-12-11 19:00:03.863958074 +0000 UTC m=+3564.890202118" observedRunningTime="2025-12-11 19:00:04.607623516 +0000 UTC m=+3565.633867570" watchObservedRunningTime="2025-12-11 19:00:04.617234041 +0000 UTC m=+3565.643478085" Dec 11 19:00:04 crc kubenswrapper[4877]: I1211 19:00:04.633962 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" podStartSLOduration=4.633937063 podStartE2EDuration="4.633937063s" podCreationTimestamp="2025-12-11 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 19:00:04.623869366 +0000 UTC m=+3565.650113410" watchObservedRunningTime="2025-12-11 19:00:04.633937063 +0000 UTC m=+3565.660181107" Dec 11 19:00:05 crc kubenswrapper[4877]: I1211 19:00:05.599493 4877 generic.go:334] "Generic (PLEG): container finished" 
podID="c7612e29-b761-465c-9602-1d6432c7f05a" containerID="0ce2916ab71be04e865565e41d84f5d2603e4e400f6d0d2f6cefe9483842451f" exitCode=0 Dec 11 19:00:05 crc kubenswrapper[4877]: I1211 19:00:05.599592 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" event={"ID":"c7612e29-b761-465c-9602-1d6432c7f05a","Type":"ContainerDied","Data":"0ce2916ab71be04e865565e41d84f5d2603e4e400f6d0d2f6cefe9483842451f"} Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.008438 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.115853 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume\") pod \"c7612e29-b761-465c-9602-1d6432c7f05a\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.116412 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume\") pod \"c7612e29-b761-465c-9602-1d6432c7f05a\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.116526 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n7w5\" (UniqueName: \"kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5\") pod \"c7612e29-b761-465c-9602-1d6432c7f05a\" (UID: \"c7612e29-b761-465c-9602-1d6432c7f05a\") " Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.117102 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "c7612e29-b761-465c-9602-1d6432c7f05a" (UID: "c7612e29-b761-465c-9602-1d6432c7f05a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.121498 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7612e29-b761-465c-9602-1d6432c7f05a" (UID: "c7612e29-b761-465c-9602-1d6432c7f05a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.122514 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5" (OuterVolumeSpecName: "kube-api-access-8n7w5") pod "c7612e29-b761-465c-9602-1d6432c7f05a" (UID: "c7612e29-b761-465c-9602-1d6432c7f05a"). InnerVolumeSpecName "kube-api-access-8n7w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.218606 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n7w5\" (UniqueName: \"kubernetes.io/projected/c7612e29-b761-465c-9602-1d6432c7f05a-kube-api-access-8n7w5\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.218658 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7612e29-b761-465c-9602-1d6432c7f05a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.218672 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7612e29-b761-465c-9602-1d6432c7f05a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.304366 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs"] Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.314364 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424615-65zjs"] Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.618264 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" event={"ID":"c7612e29-b761-465c-9602-1d6432c7f05a","Type":"ContainerDied","Data":"a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01"} Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.618308 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4cf275d743de96731c59e0eca29ef812ccdf3fe396bd9091f83747841568a01" Dec 11 19:00:07 crc kubenswrapper[4877]: I1211 19:00:07.618370 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424660-8ggmk" Dec 11 19:00:09 crc kubenswrapper[4877]: I1211 19:00:09.227365 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de7ea9a-363d-43ef-82ee-39d39cd1261e" path="/var/lib/kubelet/pods/4de7ea9a-363d-43ef-82ee-39d39cd1261e/volumes" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.571100 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:19 crc kubenswrapper[4877]: E1211 19:00:19.572244 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7612e29-b761-465c-9602-1d6432c7f05a" containerName="collect-profiles" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.572259 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7612e29-b761-465c-9602-1d6432c7f05a" containerName="collect-profiles" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.572521 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7612e29-b761-465c-9602-1d6432c7f05a" containerName="collect-profiles" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.574169 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.595331 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.637527 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bhl\" (UniqueName: \"kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.637575 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.637612 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.739476 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6bhl\" (UniqueName: \"kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.739803 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.739825 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.740202 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.740291 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.769598 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6bhl\" (UniqueName: \"kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl\") pod \"community-operators-82v59\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:19 crc kubenswrapper[4877]: I1211 19:00:19.896847 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:20 crc kubenswrapper[4877]: I1211 19:00:20.450233 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:20 crc kubenswrapper[4877]: I1211 19:00:20.743829 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerStarted","Data":"cbe5991cc62a34fe1031e1009757f27c17d0278a7c54bfdf707ffc0b7c94317e"} Dec 11 19:00:22 crc kubenswrapper[4877]: I1211 19:00:22.763338 4877 generic.go:334] "Generic (PLEG): container finished" podID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerID="f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2" exitCode=0 Dec 11 19:00:22 crc kubenswrapper[4877]: I1211 19:00:22.763417 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerDied","Data":"f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2"} Dec 11 19:00:23 crc kubenswrapper[4877]: I1211 19:00:23.774574 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerStarted","Data":"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f"} Dec 11 19:00:24 crc kubenswrapper[4877]: I1211 19:00:24.785495 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerDied","Data":"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f"} Dec 11 19:00:24 crc kubenswrapper[4877]: I1211 19:00:24.785174 4877 generic.go:334] "Generic (PLEG): container finished" podID="44a7dac5-5de3-47b2-a14e-7e62193ec589" 
containerID="7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f" exitCode=0 Dec 11 19:00:25 crc kubenswrapper[4877]: I1211 19:00:25.798690 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerStarted","Data":"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4"} Dec 11 19:00:25 crc kubenswrapper[4877]: I1211 19:00:25.817521 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-82v59" podStartSLOduration=4.019225727 podStartE2EDuration="6.817502582s" podCreationTimestamp="2025-12-11 19:00:19 +0000 UTC" firstStartedPulling="2025-12-11 19:00:22.766666526 +0000 UTC m=+3583.792910570" lastFinishedPulling="2025-12-11 19:00:25.564943381 +0000 UTC m=+3586.591187425" observedRunningTime="2025-12-11 19:00:25.815429337 +0000 UTC m=+3586.841673401" watchObservedRunningTime="2025-12-11 19:00:25.817502582 +0000 UTC m=+3586.843746636" Dec 11 19:00:29 crc kubenswrapper[4877]: I1211 19:00:29.897432 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:29 crc kubenswrapper[4877]: I1211 19:00:29.898107 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:29 crc kubenswrapper[4877]: I1211 19:00:29.951181 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:30 crc kubenswrapper[4877]: I1211 19:00:30.905217 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:30 crc kubenswrapper[4877]: I1211 19:00:30.949741 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:32 
crc kubenswrapper[4877]: I1211 19:00:32.858785 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-82v59" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="registry-server" containerID="cri-o://c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4" gracePeriod=2 Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.354893 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.542552 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content\") pod \"44a7dac5-5de3-47b2-a14e-7e62193ec589\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.542735 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities\") pod \"44a7dac5-5de3-47b2-a14e-7e62193ec589\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.542812 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6bhl\" (UniqueName: \"kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl\") pod \"44a7dac5-5de3-47b2-a14e-7e62193ec589\" (UID: \"44a7dac5-5de3-47b2-a14e-7e62193ec589\") " Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.543585 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities" (OuterVolumeSpecName: "utilities") pod "44a7dac5-5de3-47b2-a14e-7e62193ec589" (UID: "44a7dac5-5de3-47b2-a14e-7e62193ec589"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.548161 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl" (OuterVolumeSpecName: "kube-api-access-t6bhl") pod "44a7dac5-5de3-47b2-a14e-7e62193ec589" (UID: "44a7dac5-5de3-47b2-a14e-7e62193ec589"). InnerVolumeSpecName "kube-api-access-t6bhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.594673 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44a7dac5-5de3-47b2-a14e-7e62193ec589" (UID: "44a7dac5-5de3-47b2-a14e-7e62193ec589"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.645289 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.645328 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44a7dac5-5de3-47b2-a14e-7e62193ec589-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.645339 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6bhl\" (UniqueName: \"kubernetes.io/projected/44a7dac5-5de3-47b2-a14e-7e62193ec589-kube-api-access-t6bhl\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.870025 4877 generic.go:334] "Generic (PLEG): container finished" podID="44a7dac5-5de3-47b2-a14e-7e62193ec589" 
containerID="c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4" exitCode=0 Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.870076 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerDied","Data":"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4"} Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.870107 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82v59" event={"ID":"44a7dac5-5de3-47b2-a14e-7e62193ec589","Type":"ContainerDied","Data":"cbe5991cc62a34fe1031e1009757f27c17d0278a7c54bfdf707ffc0b7c94317e"} Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.870126 4877 scope.go:117] "RemoveContainer" containerID="c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.870290 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-82v59" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.916881 4877 scope.go:117] "RemoveContainer" containerID="7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f" Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.918817 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.950522 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-82v59"] Dec 11 19:00:33 crc kubenswrapper[4877]: I1211 19:00:33.969416 4877 scope.go:117] "RemoveContainer" containerID="f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.008970 4877 scope.go:117] "RemoveContainer" containerID="c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4" Dec 11 19:00:34 crc kubenswrapper[4877]: E1211 19:00:34.009527 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4\": container with ID starting with c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4 not found: ID does not exist" containerID="c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.009639 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4"} err="failed to get container status \"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4\": rpc error: code = NotFound desc = could not find container \"c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4\": container with ID starting with c7afa48c2f37dcaabb9382660ac0a93e5e5326236cb4719f1f444f4095c568b4 not 
found: ID does not exist" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.009732 4877 scope.go:117] "RemoveContainer" containerID="7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f" Dec 11 19:00:34 crc kubenswrapper[4877]: E1211 19:00:34.010053 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f\": container with ID starting with 7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f not found: ID does not exist" containerID="7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.010094 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f"} err="failed to get container status \"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f\": rpc error: code = NotFound desc = could not find container \"7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f\": container with ID starting with 7d13e968aeca9868175913bb3753f861503ca6931acb69d335693b0a0113013f not found: ID does not exist" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.010121 4877 scope.go:117] "RemoveContainer" containerID="f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2" Dec 11 19:00:34 crc kubenswrapper[4877]: E1211 19:00:34.010399 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2\": container with ID starting with f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2 not found: ID does not exist" containerID="f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2" Dec 11 19:00:34 crc kubenswrapper[4877]: I1211 19:00:34.010434 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2"} err="failed to get container status \"f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2\": rpc error: code = NotFound desc = could not find container \"f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2\": container with ID starting with f05d0f2d18c8f63359945c4e77388eba576419d64f84ee099fad9213b0a518b2 not found: ID does not exist" Dec 11 19:00:35 crc kubenswrapper[4877]: I1211 19:00:35.235022 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" path="/var/lib/kubelet/pods/44a7dac5-5de3-47b2-a14e-7e62193ec589/volumes" Dec 11 19:00:45 crc kubenswrapper[4877]: E1211 19:00:45.223771 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a1d4c7_45fe_4644_9c71_b2e839813b50.slice/crio-conmon-dd6187e6c4dc0172e07cb359f8e77e92ce24627996931cf3fb47a0d05188b107.scope\": RecentStats: unable to find data in memory cache]" Dec 11 19:00:46 crc kubenswrapper[4877]: I1211 19:00:46.024863 4877 generic.go:334] "Generic (PLEG): container finished" podID="f0a1d4c7-45fe-4644-9c71-b2e839813b50" containerID="dd6187e6c4dc0172e07cb359f8e77e92ce24627996931cf3fb47a0d05188b107" exitCode=0 Dec 11 19:00:46 crc kubenswrapper[4877]: I1211 19:00:46.024918 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" event={"ID":"f0a1d4c7-45fe-4644-9c71-b2e839813b50","Type":"ContainerDied","Data":"dd6187e6c4dc0172e07cb359f8e77e92ce24627996931cf3fb47a0d05188b107"} Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.079996 4877 scope.go:117] "RemoveContainer" containerID="2c9f87cc10cff146b2b99b785816b59d105aa588dabf57b51cac2649b90e7c2a" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 
19:00:47.232720 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.272733 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ntr7t"] Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.279728 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ntr7t"] Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.313294 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblpd\" (UniqueName: \"kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd\") pod \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.313496 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host\") pod \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\" (UID: \"f0a1d4c7-45fe-4644-9c71-b2e839813b50\") " Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.313675 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host" (OuterVolumeSpecName: "host") pod "f0a1d4c7-45fe-4644-9c71-b2e839813b50" (UID: "f0a1d4c7-45fe-4644-9c71-b2e839813b50"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.314001 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a1d4c7-45fe-4644-9c71-b2e839813b50-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327047 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:00:47 crc kubenswrapper[4877]: E1211 19:00:47.327520 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="extract-utilities" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327555 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="extract-utilities" Dec 11 19:00:47 crc kubenswrapper[4877]: E1211 19:00:47.327590 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a1d4c7-45fe-4644-9c71-b2e839813b50" containerName="container-00" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327597 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a1d4c7-45fe-4644-9c71-b2e839813b50" containerName="container-00" Dec 11 19:00:47 crc kubenswrapper[4877]: E1211 19:00:47.327608 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="extract-content" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327616 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="extract-content" Dec 11 19:00:47 crc kubenswrapper[4877]: E1211 19:00:47.327635 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="registry-server" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327643 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="registry-server" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327881 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a1d4c7-45fe-4644-9c71-b2e839813b50" containerName="container-00" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.327922 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a7dac5-5de3-47b2-a14e-7e62193ec589" containerName="registry-server" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.329896 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.342396 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.381493 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd" (OuterVolumeSpecName: "kube-api-access-dblpd") pod "f0a1d4c7-45fe-4644-9c71-b2e839813b50" (UID: "f0a1d4c7-45fe-4644-9c71-b2e839813b50"). InnerVolumeSpecName "kube-api-access-dblpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.415611 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.416059 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kb4z\" (UniqueName: \"kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.416190 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.416244 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblpd\" (UniqueName: \"kubernetes.io/projected/f0a1d4c7-45fe-4644-9c71-b2e839813b50-kube-api-access-dblpd\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.517220 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kb4z\" (UniqueName: \"kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc 
kubenswrapper[4877]: I1211 19:00:47.517675 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.517721 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.518235 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.518311 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.538606 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kb4z\" (UniqueName: \"kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z\") pod \"redhat-operators-z4xj8\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:47 crc kubenswrapper[4877]: I1211 19:00:47.705671 4877 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.059980 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7b92aa92cb451a298ece1630c554135222e0cf2ff0b574f0647312be9e4197" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.060034 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ntr7t" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.144417 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.577547 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ld5pr"] Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.578910 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.741312 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8kvh\" (UniqueName: \"kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.741419 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.843363 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.843641 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.844329 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8kvh\" (UniqueName: \"kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.872449 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8kvh\" (UniqueName: \"kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh\") pod \"crc-debug-ld5pr\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: I1211 19:00:48.894902 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:48 crc kubenswrapper[4877]: W1211 19:00:48.940717 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod776e6523_b11a_461c_95a7_6135cc5593a1.slice/crio-7bbe3bca43113895575179753f775cb54940487107f96bb80a8020c34154151c WatchSource:0}: Error finding container 7bbe3bca43113895575179753f775cb54940487107f96bb80a8020c34154151c: Status 404 returned error can't find the container with id 7bbe3bca43113895575179753f775cb54940487107f96bb80a8020c34154151c Dec 11 19:00:49 crc kubenswrapper[4877]: I1211 19:00:49.076661 4877 generic.go:334] "Generic (PLEG): container finished" podID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerID="4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3" exitCode=0 Dec 11 19:00:49 crc kubenswrapper[4877]: I1211 19:00:49.076717 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerDied","Data":"4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3"} Dec 11 19:00:49 crc kubenswrapper[4877]: I1211 19:00:49.076761 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerStarted","Data":"12e7acd7c956bcd4ce8afe89f5269d7914e90b986bff6c06848bbf0ddfd2bc62"} Dec 11 19:00:49 crc kubenswrapper[4877]: I1211 19:00:49.078201 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" event={"ID":"776e6523-b11a-461c-95a7-6135cc5593a1","Type":"ContainerStarted","Data":"7bbe3bca43113895575179753f775cb54940487107f96bb80a8020c34154151c"} Dec 11 19:00:49 crc kubenswrapper[4877]: I1211 19:00:49.237911 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f0a1d4c7-45fe-4644-9c71-b2e839813b50" path="/var/lib/kubelet/pods/f0a1d4c7-45fe-4644-9c71-b2e839813b50/volumes" Dec 11 19:00:50 crc kubenswrapper[4877]: I1211 19:00:50.094871 4877 generic.go:334] "Generic (PLEG): container finished" podID="776e6523-b11a-461c-95a7-6135cc5593a1" containerID="46d7964a90e977c57934e61db7b592053e6117040aff6421bd4898567076fb4e" exitCode=0 Dec 11 19:00:50 crc kubenswrapper[4877]: I1211 19:00:50.094961 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" event={"ID":"776e6523-b11a-461c-95a7-6135cc5593a1","Type":"ContainerDied","Data":"46d7964a90e977c57934e61db7b592053e6117040aff6421bd4898567076fb4e"} Dec 11 19:00:50 crc kubenswrapper[4877]: I1211 19:00:50.717116 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ld5pr"] Dec 11 19:00:50 crc kubenswrapper[4877]: I1211 19:00:50.724331 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-ld5pr"] Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.112748 4877 generic.go:334] "Generic (PLEG): container finished" podID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerID="24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0" exitCode=0 Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.112949 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerDied","Data":"24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0"} Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.226449 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.397575 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host\") pod \"776e6523-b11a-461c-95a7-6135cc5593a1\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.397916 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8kvh\" (UniqueName: \"kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh\") pod \"776e6523-b11a-461c-95a7-6135cc5593a1\" (UID: \"776e6523-b11a-461c-95a7-6135cc5593a1\") " Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.397678 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host" (OuterVolumeSpecName: "host") pod "776e6523-b11a-461c-95a7-6135cc5593a1" (UID: "776e6523-b11a-461c-95a7-6135cc5593a1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.398433 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/776e6523-b11a-461c-95a7-6135cc5593a1-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.408511 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh" (OuterVolumeSpecName: "kube-api-access-k8kvh") pod "776e6523-b11a-461c-95a7-6135cc5593a1" (UID: "776e6523-b11a-461c-95a7-6135cc5593a1"). InnerVolumeSpecName "kube-api-access-k8kvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.500733 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8kvh\" (UniqueName: \"kubernetes.io/projected/776e6523-b11a-461c-95a7-6135cc5593a1-kube-api-access-k8kvh\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.899282 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-jwt7p"] Dec 11 19:00:51 crc kubenswrapper[4877]: E1211 19:00:51.899750 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776e6523-b11a-461c-95a7-6135cc5593a1" containerName="container-00" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.899766 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="776e6523-b11a-461c-95a7-6135cc5593a1" containerName="container-00" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.899982 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="776e6523-b11a-461c-95a7-6135cc5593a1" containerName="container-00" Dec 11 19:00:51 crc kubenswrapper[4877]: I1211 19:00:51.900642 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.012986 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.013054 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xlt2\" (UniqueName: \"kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.116238 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.116286 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xlt2\" (UniqueName: \"kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.116447 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc 
kubenswrapper[4877]: I1211 19:00:52.127389 4877 scope.go:117] "RemoveContainer" containerID="46d7964a90e977c57934e61db7b592053e6117040aff6421bd4898567076fb4e" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.127739 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-ld5pr" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.138089 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xlt2\" (UniqueName: \"kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2\") pod \"crc-debug-jwt7p\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:52 crc kubenswrapper[4877]: I1211 19:00:52.227602 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.142399 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerStarted","Data":"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf"} Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.147405 4877 generic.go:334] "Generic (PLEG): container finished" podID="6bc3bdf0-d057-460f-a880-73c112e3149d" containerID="e59e2e29ab872446422a2df5606807e78cc4acd5829083d15c2482dcbd08cc64" exitCode=0 Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.147460 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" event={"ID":"6bc3bdf0-d057-460f-a880-73c112e3149d","Type":"ContainerDied","Data":"e59e2e29ab872446422a2df5606807e78cc4acd5829083d15c2482dcbd08cc64"} Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.147542 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" 
event={"ID":"6bc3bdf0-d057-460f-a880-73c112e3149d","Type":"ContainerStarted","Data":"9ab083e2b231c699439a1bacccc1be62741ae7efef58b70a6edbf1baa4c46b71"} Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.174726 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z4xj8" podStartSLOduration=3.608627527 podStartE2EDuration="6.174709081s" podCreationTimestamp="2025-12-11 19:00:47 +0000 UTC" firstStartedPulling="2025-12-11 19:00:49.090791046 +0000 UTC m=+3610.117035130" lastFinishedPulling="2025-12-11 19:00:51.65687262 +0000 UTC m=+3612.683116684" observedRunningTime="2025-12-11 19:00:53.165071936 +0000 UTC m=+3614.191315980" watchObservedRunningTime="2025-12-11 19:00:53.174709081 +0000 UTC m=+3614.200953125" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.212906 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-jwt7p"] Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.229644 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="776e6523-b11a-461c-95a7-6135cc5593a1" path="/var/lib/kubelet/pods/776e6523-b11a-461c-95a7-6135cc5593a1/volumes" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.231071 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ssbsl/crc-debug-jwt7p"] Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.715326 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:00:53 crc kubenswrapper[4877]: E1211 19:00:53.716030 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc3bdf0-d057-460f-a880-73c112e3149d" containerName="container-00" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.716069 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc3bdf0-d057-460f-a880-73c112e3149d" containerName="container-00" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.716512 4877 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc3bdf0-d057-460f-a880-73c112e3149d" containerName="container-00" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.722794 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.745018 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.850733 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktzr\" (UniqueName: \"kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.850834 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.851052 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.952754 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities\") pod 
\"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.952900 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.953002 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ktzr\" (UniqueName: \"kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.953388 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.953475 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content\") pod \"redhat-marketplace-n6cs8\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:53 crc kubenswrapper[4877]: I1211 19:00:53.982596 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ktzr\" (UniqueName: \"kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr\") pod \"redhat-marketplace-n6cs8\" (UID: 
\"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.054128 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.321456 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.400503 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xlt2\" (UniqueName: \"kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2\") pod \"6bc3bdf0-d057-460f-a880-73c112e3149d\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.400944 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host\") pod \"6bc3bdf0-d057-460f-a880-73c112e3149d\" (UID: \"6bc3bdf0-d057-460f-a880-73c112e3149d\") " Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.401436 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host" (OuterVolumeSpecName: "host") pod "6bc3bdf0-d057-460f-a880-73c112e3149d" (UID: "6bc3bdf0-d057-460f-a880-73c112e3149d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.401653 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6bc3bdf0-d057-460f-a880-73c112e3149d-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.407639 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2" (OuterVolumeSpecName: "kube-api-access-8xlt2") pod "6bc3bdf0-d057-460f-a880-73c112e3149d" (UID: "6bc3bdf0-d057-460f-a880-73c112e3149d"). InnerVolumeSpecName "kube-api-access-8xlt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.503588 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xlt2\" (UniqueName: \"kubernetes.io/projected/6bc3bdf0-d057-460f-a880-73c112e3149d-kube-api-access-8xlt2\") on node \"crc\" DevicePath \"\"" Dec 11 19:00:54 crc kubenswrapper[4877]: I1211 19:00:54.540574 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:00:55 crc kubenswrapper[4877]: I1211 19:00:55.217470 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/crc-debug-jwt7p" Dec 11 19:00:55 crc kubenswrapper[4877]: I1211 19:00:55.226013 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc3bdf0-d057-460f-a880-73c112e3149d" path="/var/lib/kubelet/pods/6bc3bdf0-d057-460f-a880-73c112e3149d/volumes" Dec 11 19:00:55 crc kubenswrapper[4877]: I1211 19:00:55.227279 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerStarted","Data":"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68"} Dec 11 19:00:55 crc kubenswrapper[4877]: I1211 19:00:55.227324 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerStarted","Data":"a4f0ebcfbaf4c98ce7e2c0eb96d786de4a54c49e8b60eaee1bce5a1f7321bcaf"} Dec 11 19:00:55 crc kubenswrapper[4877]: I1211 19:00:55.227337 4877 scope.go:117] "RemoveContainer" containerID="e59e2e29ab872446422a2df5606807e78cc4acd5829083d15c2482dcbd08cc64" Dec 11 19:00:56 crc kubenswrapper[4877]: I1211 19:00:56.229099 4877 generic.go:334] "Generic (PLEG): container finished" podID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerID="3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68" exitCode=0 Dec 11 19:00:56 crc kubenswrapper[4877]: I1211 19:00:56.229303 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerDied","Data":"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68"} Dec 11 19:00:57 crc kubenswrapper[4877]: I1211 19:00:57.240195 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" 
event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerStarted","Data":"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae"} Dec 11 19:00:57 crc kubenswrapper[4877]: I1211 19:00:57.705774 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:57 crc kubenswrapper[4877]: I1211 19:00:57.706987 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:57 crc kubenswrapper[4877]: I1211 19:00:57.754185 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:58 crc kubenswrapper[4877]: I1211 19:00:58.248276 4877 generic.go:334] "Generic (PLEG): container finished" podID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerID="7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae" exitCode=0 Dec 11 19:00:58 crc kubenswrapper[4877]: I1211 19:00:58.250546 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerDied","Data":"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae"} Dec 11 19:00:58 crc kubenswrapper[4877]: I1211 19:00:58.303804 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:00:59 crc kubenswrapper[4877]: I1211 19:00:59.259453 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerStarted","Data":"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb"} Dec 11 19:00:59 crc kubenswrapper[4877]: I1211 19:00:59.291397 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6cs8" 
podStartSLOduration=3.737635806 podStartE2EDuration="6.291358441s" podCreationTimestamp="2025-12-11 19:00:53 +0000 UTC" firstStartedPulling="2025-12-11 19:00:56.231060965 +0000 UTC m=+3617.257305009" lastFinishedPulling="2025-12-11 19:00:58.78478358 +0000 UTC m=+3619.811027644" observedRunningTime="2025-12-11 19:00:59.282031934 +0000 UTC m=+3620.308276058" watchObservedRunningTime="2025-12-11 19:00:59.291358441 +0000 UTC m=+3620.317602505" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.079310 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.156273 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29424661-j6vcg"] Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.158143 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.171320 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424661-j6vcg"] Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.210640 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.210897 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.210987 4877 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.211174 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmksl\" (UniqueName: \"kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.312734 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmksl\" (UniqueName: \"kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.313566 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.313761 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.313892 4877 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.319291 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.320799 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.321200 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.333396 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmksl\" (UniqueName: \"kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl\") pod \"keystone-cron-29424661-j6vcg\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:00 crc kubenswrapper[4877]: I1211 19:01:00.479341 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.007824 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424661-j6vcg"] Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.276687 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424661-j6vcg" event={"ID":"cc6bcaa4-72c0-4cac-b05d-fe57d5086736","Type":"ContainerStarted","Data":"baffbb96e42bf59eceead77b4dfda6c8db2c016c0e0acd9f38886e9cc5ad9eed"} Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.277130 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424661-j6vcg" event={"ID":"cc6bcaa4-72c0-4cac-b05d-fe57d5086736","Type":"ContainerStarted","Data":"297ae5783cae676394d87c0466d652c36736cd3b5121d2238080f5983888d601"} Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.276933 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z4xj8" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="registry-server" containerID="cri-o://4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf" gracePeriod=2 Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.311254 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29424661-j6vcg" podStartSLOduration=1.311225943 podStartE2EDuration="1.311225943s" podCreationTimestamp="2025-12-11 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 19:01:01.297032937 +0000 UTC m=+3622.323276991" watchObservedRunningTime="2025-12-11 19:01:01.311225943 +0000 UTC m=+3622.337469997" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.808560 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.844821 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kb4z\" (UniqueName: \"kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z\") pod \"77af7ca2-5ccf-4362-aef4-e11ea357282f\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.844886 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content\") pod \"77af7ca2-5ccf-4362-aef4-e11ea357282f\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.844929 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities\") pod \"77af7ca2-5ccf-4362-aef4-e11ea357282f\" (UID: \"77af7ca2-5ccf-4362-aef4-e11ea357282f\") " Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.845748 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities" (OuterVolumeSpecName: "utilities") pod "77af7ca2-5ccf-4362-aef4-e11ea357282f" (UID: "77af7ca2-5ccf-4362-aef4-e11ea357282f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.858049 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z" (OuterVolumeSpecName: "kube-api-access-2kb4z") pod "77af7ca2-5ccf-4362-aef4-e11ea357282f" (UID: "77af7ca2-5ccf-4362-aef4-e11ea357282f"). InnerVolumeSpecName "kube-api-access-2kb4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.947564 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.947851 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kb4z\" (UniqueName: \"kubernetes.io/projected/77af7ca2-5ccf-4362-aef4-e11ea357282f-kube-api-access-2kb4z\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:01 crc kubenswrapper[4877]: I1211 19:01:01.995806 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77af7ca2-5ccf-4362-aef4-e11ea357282f" (UID: "77af7ca2-5ccf-4362-aef4-e11ea357282f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.050454 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af7ca2-5ccf-4362-aef4-e11ea357282f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.287156 4877 generic.go:334] "Generic (PLEG): container finished" podID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerID="4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf" exitCode=0 Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.287215 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerDied","Data":"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf"} Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.287234 4877 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4xj8" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.287264 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4xj8" event={"ID":"77af7ca2-5ccf-4362-aef4-e11ea357282f","Type":"ContainerDied","Data":"12e7acd7c956bcd4ce8afe89f5269d7914e90b986bff6c06848bbf0ddfd2bc62"} Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.287283 4877 scope.go:117] "RemoveContainer" containerID="4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.327052 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.341759 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z4xj8"] Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.343253 4877 scope.go:117] "RemoveContainer" containerID="24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.389837 4877 scope.go:117] "RemoveContainer" containerID="4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.439361 4877 scope.go:117] "RemoveContainer" containerID="4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf" Dec 11 19:01:02 crc kubenswrapper[4877]: E1211 19:01:02.439884 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf\": container with ID starting with 4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf not found: ID does not exist" containerID="4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.439932 4877 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf"} err="failed to get container status \"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf\": rpc error: code = NotFound desc = could not find container \"4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf\": container with ID starting with 4e0953789af235c968da663974ea42b1f792aa63a55fdeff8bebee74f23bbbaf not found: ID does not exist" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.439963 4877 scope.go:117] "RemoveContainer" containerID="24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0" Dec 11 19:01:02 crc kubenswrapper[4877]: E1211 19:01:02.440305 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0\": container with ID starting with 24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0 not found: ID does not exist" containerID="24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.440328 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0"} err="failed to get container status \"24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0\": rpc error: code = NotFound desc = could not find container \"24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0\": container with ID starting with 24066b7100a912a29771920ad0be67a7bd959a919e409d169882deae50c692c0 not found: ID does not exist" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.440343 4877 scope.go:117] "RemoveContainer" containerID="4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3" Dec 11 19:01:02 crc kubenswrapper[4877]: E1211 
19:01:02.440561 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3\": container with ID starting with 4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3 not found: ID does not exist" containerID="4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3" Dec 11 19:01:02 crc kubenswrapper[4877]: I1211 19:01:02.440596 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3"} err="failed to get container status \"4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3\": rpc error: code = NotFound desc = could not find container \"4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3\": container with ID starting with 4212176de5866cb08769c5453dbf37a367f34c1fc2c125b3687424d73e1bedd3 not found: ID does not exist" Dec 11 19:01:03 crc kubenswrapper[4877]: I1211 19:01:03.225307 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" path="/var/lib/kubelet/pods/77af7ca2-5ccf-4362-aef4-e11ea357282f/volumes" Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.054724 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.055151 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.111116 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.306902 4877 generic.go:334] "Generic (PLEG): container finished" podID="cc6bcaa4-72c0-4cac-b05d-fe57d5086736" 
containerID="baffbb96e42bf59eceead77b4dfda6c8db2c016c0e0acd9f38886e9cc5ad9eed" exitCode=0 Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.307198 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424661-j6vcg" event={"ID":"cc6bcaa4-72c0-4cac-b05d-fe57d5086736","Type":"ContainerDied","Data":"baffbb96e42bf59eceead77b4dfda6c8db2c016c0e0acd9f38886e9cc5ad9eed"} Dec 11 19:01:04 crc kubenswrapper[4877]: I1211 19:01:04.355080 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.077715 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.677439 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.727500 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data\") pod \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.727603 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmksl\" (UniqueName: \"kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl\") pod \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.727670 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle\") pod \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\" (UID: 
\"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.727708 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys\") pod \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\" (UID: \"cc6bcaa4-72c0-4cac-b05d-fe57d5086736\") " Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.736252 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl" (OuterVolumeSpecName: "kube-api-access-vmksl") pod "cc6bcaa4-72c0-4cac-b05d-fe57d5086736" (UID: "cc6bcaa4-72c0-4cac-b05d-fe57d5086736"). InnerVolumeSpecName "kube-api-access-vmksl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.754204 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cc6bcaa4-72c0-4cac-b05d-fe57d5086736" (UID: "cc6bcaa4-72c0-4cac-b05d-fe57d5086736"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.794788 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data" (OuterVolumeSpecName: "config-data") pod "cc6bcaa4-72c0-4cac-b05d-fe57d5086736" (UID: "cc6bcaa4-72c0-4cac-b05d-fe57d5086736"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.802581 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc6bcaa4-72c0-4cac-b05d-fe57d5086736" (UID: "cc6bcaa4-72c0-4cac-b05d-fe57d5086736"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.833343 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmksl\" (UniqueName: \"kubernetes.io/projected/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-kube-api-access-vmksl\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.833389 4877 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.833400 4877 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:05 crc kubenswrapper[4877]: I1211 19:01:05.833408 4877 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6bcaa4-72c0-4cac-b05d-fe57d5086736-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:06 crc kubenswrapper[4877]: I1211 19:01:06.327245 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424661-j6vcg" event={"ID":"cc6bcaa4-72c0-4cac-b05d-fe57d5086736","Type":"ContainerDied","Data":"297ae5783cae676394d87c0466d652c36736cd3b5121d2238080f5983888d601"} Dec 11 19:01:06 crc kubenswrapper[4877]: I1211 19:01:06.327692 4877 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="297ae5783cae676394d87c0466d652c36736cd3b5121d2238080f5983888d601" Dec 11 19:01:06 crc kubenswrapper[4877]: I1211 19:01:06.327505 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6cs8" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="registry-server" containerID="cri-o://be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb" gracePeriod=2 Dec 11 19:01:06 crc kubenswrapper[4877]: I1211 19:01:06.327324 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424661-j6vcg" Dec 11 19:01:06 crc kubenswrapper[4877]: I1211 19:01:06.910991 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.068757 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities\") pod \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.068864 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content\") pod \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.068903 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ktzr\" (UniqueName: \"kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr\") pod \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\" (UID: \"16170ddf-9859-49aa-a1e7-7e89cb00a13e\") " Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.070069 4877 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities" (OuterVolumeSpecName: "utilities") pod "16170ddf-9859-49aa-a1e7-7e89cb00a13e" (UID: "16170ddf-9859-49aa-a1e7-7e89cb00a13e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.076666 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr" (OuterVolumeSpecName: "kube-api-access-7ktzr") pod "16170ddf-9859-49aa-a1e7-7e89cb00a13e" (UID: "16170ddf-9859-49aa-a1e7-7e89cb00a13e"). InnerVolumeSpecName "kube-api-access-7ktzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.089893 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16170ddf-9859-49aa-a1e7-7e89cb00a13e" (UID: "16170ddf-9859-49aa-a1e7-7e89cb00a13e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.170404 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ktzr\" (UniqueName: \"kubernetes.io/projected/16170ddf-9859-49aa-a1e7-7e89cb00a13e-kube-api-access-7ktzr\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.170440 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.170449 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16170ddf-9859-49aa-a1e7-7e89cb00a13e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.338826 4877 generic.go:334] "Generic (PLEG): container finished" podID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerID="be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb" exitCode=0 Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.338853 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerDied","Data":"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb"} Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.338906 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6cs8" event={"ID":"16170ddf-9859-49aa-a1e7-7e89cb00a13e","Type":"ContainerDied","Data":"a4f0ebcfbaf4c98ce7e2c0eb96d786de4a54c49e8b60eaee1bce5a1f7321bcaf"} Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.338923 4877 scope.go:117] "RemoveContainer" containerID="be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 
19:01:07.339054 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6cs8" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.372111 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.373296 4877 scope.go:117] "RemoveContainer" containerID="7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.386187 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6cs8"] Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.405088 4877 scope.go:117] "RemoveContainer" containerID="3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.468948 4877 scope.go:117] "RemoveContainer" containerID="be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb" Dec 11 19:01:07 crc kubenswrapper[4877]: E1211 19:01:07.469944 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb\": container with ID starting with be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb not found: ID does not exist" containerID="be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.470007 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb"} err="failed to get container status \"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb\": rpc error: code = NotFound desc = could not find container \"be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb\": container with ID starting with 
be2adcfd555d2e3ea8a26797106982179c14a574fd59d8d3a7ab3f058da32ccb not found: ID does not exist" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.470048 4877 scope.go:117] "RemoveContainer" containerID="7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae" Dec 11 19:01:07 crc kubenswrapper[4877]: E1211 19:01:07.471047 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae\": container with ID starting with 7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae not found: ID does not exist" containerID="7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.471105 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae"} err="failed to get container status \"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae\": rpc error: code = NotFound desc = could not find container \"7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae\": container with ID starting with 7afa657ec821b6845983b8b62bbbb186c02adab4b88f0130060f8fa82ecaa9ae not found: ID does not exist" Dec 11 19:01:07 crc kubenswrapper[4877]: I1211 19:01:07.471146 4877 scope.go:117] "RemoveContainer" containerID="3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68" Dec 11 19:01:07 crc kubenswrapper[4877]: E1211 19:01:07.471625 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68\": container with ID starting with 3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68 not found: ID does not exist" containerID="3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68" Dec 11 19:01:07 crc 
kubenswrapper[4877]: I1211 19:01:07.471672 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68"} err="failed to get container status \"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68\": rpc error: code = NotFound desc = could not find container \"3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68\": container with ID starting with 3bd46e3cfb652948eba9e187d4dc405a7569fa0c15439979608eab93c3ed7e68 not found: ID does not exist" Dec 11 19:01:09 crc kubenswrapper[4877]: I1211 19:01:09.225636 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" path="/var/lib/kubelet/pods/16170ddf-9859-49aa-a1e7-7e89cb00a13e/volumes" Dec 11 19:01:10 crc kubenswrapper[4877]: I1211 19:01:10.564733 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f986c9df4-vbvbf_a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03/barbican-api/0.log" Dec 11 19:01:10 crc kubenswrapper[4877]: I1211 19:01:10.727446 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f986c9df4-vbvbf_a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03/barbican-api-log/0.log" Dec 11 19:01:10 crc kubenswrapper[4877]: I1211 19:01:10.758387 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b59dbf9b-n74fk_60930296-787e-4fea-8180-8b7d3aba29b8/barbican-keystone-listener/0.log" Dec 11 19:01:10 crc kubenswrapper[4877]: I1211 19:01:10.827971 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b59dbf9b-n74fk_60930296-787e-4fea-8180-8b7d3aba29b8/barbican-keystone-listener-log/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.115045 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-85794d5dd7-rmjkp_79f3b97f-f3f1-4547-81e4-e2c7c833745e/barbican-worker-log/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.128583 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-85794d5dd7-rmjkp_79f3b97f-f3f1-4547-81e4-e2c7c833745e/barbican-worker/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.345056 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/ceilometer-central-agent/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.347580 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k_3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.391526 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/ceilometer-notification-agent/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.501266 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/sg-core/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.523684 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/proxy-httpd/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.652550 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcf8e591-86a6-4c17-89a0-9d93ec7bb590/cinder-api/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.746655 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcf8e591-86a6-4c17-89a0-9d93ec7bb590/cinder-api-log/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.821218 4877 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-scheduler-0_60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8/cinder-scheduler/0.log" Dec 11 19:01:11 crc kubenswrapper[4877]: I1211 19:01:11.881190 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8/probe/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.010650 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4twd7_ba5e024b-8ec8-4214-bca2-9dbf57f69623/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.095915 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-js6ck_728cbe41-aead-4492-bed9-312b93b70b88/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.226133 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/init/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.446323 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/dnsmasq-dns/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.449482 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/init/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.540798 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp_2450d804-2d74-4d93-8a06-95190b0c8e94/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.643586 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_a8011cc1-2d08-433e-bc2b-71f11aa75cd2/glance-httpd/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.671097 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a8011cc1-2d08-433e-bc2b-71f11aa75cd2/glance-log/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.828423 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_11c62194-8ad1-4529-98d8-7ad070a3ac30/glance-httpd/0.log" Dec 11 19:01:12 crc kubenswrapper[4877]: I1211 19:01:12.899567 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_11c62194-8ad1-4529-98d8-7ad070a3ac30/glance-log/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.048360 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9dbfd97b-ck4jv_2afc51b6-dafc-47ce-875a-3a6249f69b47/horizon/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.134461 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-f27jv_9dd6596b-9571-4ce0-8658-78d5f99fbb5a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.366694 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c68cr_cd857578-73bd-4b2b-b7ba-0b6a7058b48e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.367511 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9dbfd97b-ck4jv_2afc51b6-dafc-47ce-875a-3a6249f69b47/horizon-log/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.628443 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29424661-j6vcg_cc6bcaa4-72c0-4cac-b05d-fe57d5086736/keystone-cron/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.658440 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c7754d7b9-8ngjr_68a8f0df-c9a5-4812-860f-492cfeeae4bb/keystone-api/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.809771 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_614a50f9-81ab-4bd7-a01d-f8074be6b773/kube-state-metrics/0.log" Dec 11 19:01:13 crc kubenswrapper[4877]: I1211 19:01:13.912037 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg_24092f15-2f1a-441e-a0b9-8bf295b95bd0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:14 crc kubenswrapper[4877]: I1211 19:01:14.183101 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4dd6dd9-6mv2v_4808e7d5-7e53-4b59-a46c-86838df224c0/neutron-api/0.log" Dec 11 19:01:14 crc kubenswrapper[4877]: I1211 19:01:14.398600 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4dd6dd9-6mv2v_4808e7d5-7e53-4b59-a46c-86838df224c0/neutron-httpd/0.log" Dec 11 19:01:14 crc kubenswrapper[4877]: I1211 19:01:14.553191 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d_cb1dfedd-c524-4375-9b91-d9f87e34e2d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.077149 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c8fc48ab-7a71-46a1-9557-65c034c6af7e/nova-cell0-conductor-conductor/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.121276 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_fd82f9ba-3316-4498-b434-e0eea4518646/nova-api-log/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.334234 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd82f9ba-3316-4498-b434-e0eea4518646/nova-api-api/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.347808 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6d9e1d47-035b-4789-91ca-61940c628347/nova-cell1-conductor-conductor/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.398607 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9ab5e774-7a03-4065-9eb8-c68aaff8d6c6/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.566443 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6b22p_27668d56-a427-4392-85d2-4e4cc52342aa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:15 crc kubenswrapper[4877]: I1211 19:01:15.792322 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_09959cf5-104d-4577-b6ae-d710a75c4aaf/nova-metadata-log/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.063876 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6/nova-scheduler-scheduler/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.067328 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/mysql-bootstrap/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.257767 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/galera/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.280801 4877 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/mysql-bootstrap/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.465855 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/mysql-bootstrap/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.657143 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/galera/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.657434 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/mysql-bootstrap/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.847499 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_22343eeb-fed7-457f-a507-a83d4071ee3a/openstackclient/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.895611 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_09959cf5-104d-4577-b6ae-d710a75c4aaf/nova-metadata-metadata/0.log" Dec 11 19:01:16 crc kubenswrapper[4877]: I1211 19:01:16.906043 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f2twl_7600f73e-c321-4ec0-af52-684b7b75ec9f/openstack-network-exporter/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.129042 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server-init/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.264035 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovs-vswitchd/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.346352 4877 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server-init/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.351547 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.500621 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zdz6c_efc5ef2c-fcea-4de5-a085-47ff35a33522/ovn-controller/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.637984 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-x5zzb_3346dff0-5931-4f19-817b-bc38011e3718/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.727018 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38d6b86-ecb2-47de-a1f3-6670ff0eb78b/openstack-network-exporter/0.log" Dec 11 19:01:17 crc kubenswrapper[4877]: I1211 19:01:17.908256 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38d6b86-ecb2-47de-a1f3-6670ff0eb78b/ovn-northd/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.011606 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b163a060-e7a9-4e81-992b-a9c72bbac544/openstack-network-exporter/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.068643 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b163a060-e7a9-4e81-992b-a9c72bbac544/ovsdbserver-nb/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.181135 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde/openstack-network-exporter/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.282831 4877 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde/ovsdbserver-sb/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.386956 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d988cd48-2h828_5f1de16a-c21b-4876-99ca-60b34d1b7e75/placement-api/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.488663 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d988cd48-2h828_5f1de16a-c21b-4876-99ca-60b34d1b7e75/placement-log/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.556140 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/setup-container/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.808542 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/setup-container/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.809682 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/rabbitmq/0.log" Dec 11 19:01:18 crc kubenswrapper[4877]: I1211 19:01:18.835868 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/setup-container/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.017344 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/rabbitmq/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.047607 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/setup-container/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.087812 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs_c4fd09ef-4694-42ad-b7fa-17a721fec8f3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.245327 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wnw2w_153cc7ad-8854-4f42-80cd-2fcdb2f453cd/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.326309 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r_db547bdf-a5ee-410d-8a44-7bc5af05321d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.472567 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b4lh7_f6dc6f9d-c3d7-45da-98ca-e00538c9680e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.703907 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-b9jsb_a2ef07f1-1d19-403b-a68c-0092e8030adb/ssh-known-hosts-edpm-deployment/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.847866 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79885c8c-7qj69_6776094e-cd5a-4539-9b5c-368030c70458/proxy-httpd/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.933912 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79885c8c-7qj69_6776094e-cd5a-4539-9b5c-368030c70458/proxy-server/0.log" Dec 11 19:01:19 crc kubenswrapper[4877]: I1211 19:01:19.947340 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vdnmw_daa2f87b-1f8a-423e-88f1-17150ab15ba0/swift-ring-rebalance/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.110219 4877 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-auditor/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.135847 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-reaper/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.239720 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-replicator/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.298466 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-auditor/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.328652 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-server/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.420508 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-replicator/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.499515 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-server/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.529849 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-updater/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.595622 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-auditor/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.629162 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-expirer/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.717484 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-server/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.741085 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-replicator/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.810481 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-updater/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.849933 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/rsync/0.log" Dec 11 19:01:20 crc kubenswrapper[4877]: I1211 19:01:20.971944 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/swift-recon-cron/0.log" Dec 11 19:01:21 crc kubenswrapper[4877]: I1211 19:01:21.157063 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_60eefae6-1396-4f0a-b52a-7827dca29fb3/tempest-tests-tempest-tests-runner/0.log" Dec 11 19:01:21 crc kubenswrapper[4877]: I1211 19:01:21.165950 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq_352625c8-a275-44c6-9758-962aa05194b1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:21 crc kubenswrapper[4877]: I1211 19:01:21.485174 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4dc36034-313e-409f-86f4-e69f9ae0ee24/test-operator-logs-container/0.log" Dec 11 19:01:21 crc kubenswrapper[4877]: I1211 
19:01:21.539096 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5_18c92635-2d69-45d4-b25a-8a67a228e11c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:01:29 crc kubenswrapper[4877]: I1211 19:01:29.159992 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ee9c80e6-7afc-495d-85c0-9c3b64d26df5/memcached/0.log" Dec 11 19:01:42 crc kubenswrapper[4877]: I1211 19:01:42.649712 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" exitCode=1 Dec 11 19:01:42 crc kubenswrapper[4877]: I1211 19:01:42.650117 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68"} Dec 11 19:01:42 crc kubenswrapper[4877]: I1211 19:01:42.650150 4877 scope.go:117] "RemoveContainer" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 19:01:42 crc kubenswrapper[4877]: I1211 19:01:42.650823 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:01:42 crc kubenswrapper[4877]: E1211 19:01:42.651083 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:01:46 crc kubenswrapper[4877]: I1211 19:01:46.778738 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.094767 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.107401 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.137999 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.275846 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.289884 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.322586 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/extract/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.446218 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ddr9m_c3dd6849-836b-462c-abbc-d97418287658/kube-rbac-proxy/0.log" Dec 11 19:01:47 crc 
kubenswrapper[4877]: I1211 19:01:47.539023 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-h429c_444a3f0d-8828-4958-9d25-61f4251d74c4/kube-rbac-proxy/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.544463 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ddr9m_c3dd6849-836b-462c-abbc-d97418287658/manager/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.650662 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-h429c_444a3f0d-8828-4958-9d25-61f4251d74c4/manager/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.717222 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hsvv2_105c0e29-3d26-49ee-83f1-9ac47ec17cfd/manager/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.718987 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hsvv2_105c0e29-3d26-49ee-83f1-9ac47ec17cfd/kube-rbac-proxy/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.854713 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-nf9dm_3670e0c0-f188-4f22-8097-52f0a00b3a47/kube-rbac-proxy/0.log" Dec 11 19:01:47 crc kubenswrapper[4877]: I1211 19:01:47.971752 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-nf9dm_3670e0c0-f188-4f22-8097-52f0a00b3a47/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.012826 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-c4nfg_fd25a8fc-0f52-4795-8a65-debdfdf452b3/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.044228 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-c4nfg_fd25a8fc-0f52-4795-8a65-debdfdf452b3/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.126953 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xtk9z_18dda364-66d0-47d3-8c03-4b0ecb73a634/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.174610 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xtk9z_18dda364-66d0-47d3-8c03-4b0ecb73a634/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.286564 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6797f5b887-q9vgk_53a860ae-4169-4f47-8ba7-032c96b4be3a/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.336641 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6797f5b887-q9vgk_53a860ae-4169-4f47-8ba7-032c96b4be3a/manager/9.log" Dec 11 19:01:48 crc kubenswrapper[4877]: E1211 19:01:48.358773 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1\": container with ID starting with 1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1 not found: ID does not exist" containerID="1a3d9490cdcfcb5aed490cde0f4bb74a6cc992f75d28302cbff324d0def665d1" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.481930 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-mcjpd_2fb4fbf5-e490-43ae-b7c5-8a2e481f7209/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.491714 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-mcjpd_2fb4fbf5-e490-43ae-b7c5-8a2e481f7209/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.653981 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-tlsrq_6dcd317e-41f9-45e8-bd14-77d9f4ae25dd/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.736990 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-nprs8_e0378195-6809-4f5c-b9f3-a37177789ee5/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.750705 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-tlsrq_6dcd317e-41f9-45e8-bd14-77d9f4ae25dd/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.838038 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-nprs8_e0378195-6809-4f5c-b9f3-a37177789ee5/manager/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.926243 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-q25n5_f7fa49af-7b01-4972-aec4-5b2b42dee85f/kube-rbac-proxy/0.log" Dec 11 19:01:48 crc kubenswrapper[4877]: I1211 19:01:48.936632 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-q25n5_f7fa49af-7b01-4972-aec4-5b2b42dee85f/manager/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.058364 
4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-qn5w6_243bfab6-eced-4740-87ce-ab61441881f5/kube-rbac-proxy/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.142015 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-qn5w6_243bfab6-eced-4740-87ce-ab61441881f5/manager/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.229962 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-8vf65_099a6c32-cea0-4cea-b763-f60ba3e867e7/kube-rbac-proxy/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.328162 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-8vf65_099a6c32-cea0-4cea-b763-f60ba3e867e7/manager/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.392239 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-p97xh_efcf4499-fc58-4b4c-b047-c397b6154e38/kube-rbac-proxy/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.430328 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-p97xh_efcf4499-fc58-4b4c-b047-c397b6154e38/manager/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.535548 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f4zhrq_e525cb88-4985-4374-a7f8-185c016e4a14/kube-rbac-proxy/0.log" Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.576022 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f4zhrq_e525cb88-4985-4374-a7f8-185c016e4a14/manager/0.log" 
Dec 11 19:01:49 crc kubenswrapper[4877]: I1211 19:01:49.963933 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nmcl7_05174a0d-198b-4dbc-847c-164453075d91/registry-server/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.183294 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7pc7q_1ebbc540-13e3-4fee-a9b7-10bb95da50b9/kube-rbac-proxy/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.193254 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8c9b75f7c-ccsvg_fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169/operator/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.377924 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7pc7q_1ebbc540-13e3-4fee-a9b7-10bb95da50b9/manager/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.413767 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wldgs_8675dcf8-097e-4927-aa50-827f3034af41/kube-rbac-proxy/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.508568 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wldgs_8675dcf8-097e-4927-aa50-827f3034af41/manager/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.682256 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7cg2z_15a74efd-e36a-4946-a9e5-2453c98355aa/operator/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.718227 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-dmlvl_ca3c6f4e-3491-4109-bf60-f4efbad58bc1/kube-rbac-proxy/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.850481 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-dmlvl_ca3c6f4e-3491-4109-bf60-f4efbad58bc1/manager/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.861740 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-545595b497-h5vf4_c21c4469-97a3-47c7-bced-d7d18aa1008a/manager/0.log" Dec 11 19:01:50 crc kubenswrapper[4877]: I1211 19:01:50.945269 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-d77ht_a571f5fa-fa44-48fd-b675-f0b42607ac7d/kube-rbac-proxy/0.log" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.031918 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-d77ht_a571f5fa-fa44-48fd-b675-f0b42607ac7d/manager/0.log" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.050284 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-9sw8b_105e535b-6aee-4187-9008-65a41e6e3572/kube-rbac-proxy/0.log" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.106381 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-9sw8b_105e535b-6aee-4187-9008-65a41e6e3572/manager/0.log" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.137817 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.138532 4877 scope.go:117] "RemoveContainer" 
containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:01:51 crc kubenswrapper[4877]: E1211 19:01:51.138759 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.195899 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-zm6ts_35acba78-7a75-40d7-b5fb-43d595c3bc1f/kube-rbac-proxy/0.log" Dec 11 19:01:51 crc kubenswrapper[4877]: I1211 19:01:51.252921 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-zm6ts_35acba78-7a75-40d7-b5fb-43d595c3bc1f/manager/0.log" Dec 11 19:02:01 crc kubenswrapper[4877]: I1211 19:02:01.137763 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:02:01 crc kubenswrapper[4877]: I1211 19:02:01.138985 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:02:01 crc kubenswrapper[4877]: E1211 19:02:01.139202 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 
19:02:10.253487 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254489 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="extract-content" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254507 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="extract-content" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254530 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="extract-utilities" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254537 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="extract-utilities" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254554 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="extract-utilities" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254562 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="extract-utilities" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254572 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254580 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254604 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254611 4877 
state_mem.go:107] "Deleted CPUSet assignment" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254632 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6bcaa4-72c0-4cac-b05d-fe57d5086736" containerName="keystone-cron" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254641 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6bcaa4-72c0-4cac-b05d-fe57d5086736" containerName="keystone-cron" Dec 11 19:02:10 crc kubenswrapper[4877]: E1211 19:02:10.254655 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="extract-content" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254663 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="extract-content" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254889 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="16170ddf-9859-49aa-a1e7-7e89cb00a13e" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254910 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="77af7ca2-5ccf-4362-aef4-e11ea357282f" containerName="registry-server" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.254932 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6bcaa4-72c0-4cac-b05d-fe57d5086736" containerName="keystone-cron" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.259851 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.299625 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.453752 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5dk\" (UniqueName: \"kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.454502 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.454549 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.556739 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.556784 4877 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.556843 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks5dk\" (UniqueName: \"kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.557251 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.557293 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.582220 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks5dk\" (UniqueName: \"kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk\") pod \"certified-operators-z6drv\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:10 crc kubenswrapper[4877]: I1211 19:02:10.595011 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.113533 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.121910 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kntl9_98cc800c-b8b6-49f9-94c0-42bb0c22eb76/control-plane-machine-set-operator/0.log" Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.696770 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c92n_1e05b4cd-2477-4d69-804f-c8dc59d6da3d/kube-rbac-proxy/0.log" Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.697047 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c92n_1e05b4cd-2477-4d69-804f-c8dc59d6da3d/machine-api-operator/0.log" Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.943004 4877 generic.go:334] "Generic (PLEG): container finished" podID="f173c053-534a-48b1-b515-85441479af6c" containerID="ed006d3d62a2a3e956f9c50412c62e2786cafdcd204bf547f9bc723aacb0869c" exitCode=0 Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.943309 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerDied","Data":"ed006d3d62a2a3e956f9c50412c62e2786cafdcd204bf547f9bc723aacb0869c"} Dec 11 19:02:11 crc kubenswrapper[4877]: I1211 19:02:11.943458 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerStarted","Data":"62bc511336559d5bf3795baa4dfef4cc63880aa69be96d8479ee002eda757ed0"} Dec 11 19:02:13 crc kubenswrapper[4877]: I1211 19:02:13.963126 4877 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerStarted","Data":"72c81a7c4d77a36239581b398b7c353ed734a53a71693e09736d44bd519ab61f"} Dec 11 19:02:14 crc kubenswrapper[4877]: I1211 19:02:14.974292 4877 generic.go:334] "Generic (PLEG): container finished" podID="f173c053-534a-48b1-b515-85441479af6c" containerID="72c81a7c4d77a36239581b398b7c353ed734a53a71693e09736d44bd519ab61f" exitCode=0 Dec 11 19:02:14 crc kubenswrapper[4877]: I1211 19:02:14.974370 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerDied","Data":"72c81a7c4d77a36239581b398b7c353ed734a53a71693e09736d44bd519ab61f"} Dec 11 19:02:15 crc kubenswrapper[4877]: I1211 19:02:15.215928 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:02:15 crc kubenswrapper[4877]: E1211 19:02:15.216249 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:02:16 crc kubenswrapper[4877]: I1211 19:02:16.637684 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:02:16 crc kubenswrapper[4877]: I1211 19:02:16.637920 4877 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:02:16 crc kubenswrapper[4877]: I1211 19:02:16.997796 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerStarted","Data":"f128703a926a1a1e548117321d66e68d9c864dbe8365b163739b7c55931faff9"} Dec 11 19:02:17 crc kubenswrapper[4877]: I1211 19:02:17.052632 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6drv" podStartSLOduration=2.452499241 podStartE2EDuration="7.052611092s" podCreationTimestamp="2025-12-11 19:02:10 +0000 UTC" firstStartedPulling="2025-12-11 19:02:11.944478841 +0000 UTC m=+3692.970722885" lastFinishedPulling="2025-12-11 19:02:16.544590672 +0000 UTC m=+3697.570834736" observedRunningTime="2025-12-11 19:02:17.05178003 +0000 UTC m=+3698.078024074" watchObservedRunningTime="2025-12-11 19:02:17.052611092 +0000 UTC m=+3698.078855136" Dec 11 19:02:20 crc kubenswrapper[4877]: I1211 19:02:20.596274 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:20 crc kubenswrapper[4877]: I1211 19:02:20.596640 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:20 crc kubenswrapper[4877]: I1211 19:02:20.678114 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:21 crc kubenswrapper[4877]: I1211 19:02:21.087894 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 
19:02:21 crc kubenswrapper[4877]: I1211 19:02:21.150720 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:23 crc kubenswrapper[4877]: I1211 19:02:23.054032 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z6drv" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="registry-server" containerID="cri-o://f128703a926a1a1e548117321d66e68d9c864dbe8365b163739b7c55931faff9" gracePeriod=2 Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.080575 4877 generic.go:334] "Generic (PLEG): container finished" podID="f173c053-534a-48b1-b515-85441479af6c" containerID="f128703a926a1a1e548117321d66e68d9c864dbe8365b163739b7c55931faff9" exitCode=0 Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.080945 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerDied","Data":"f128703a926a1a1e548117321d66e68d9c864dbe8365b163739b7c55931faff9"} Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.165627 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.329987 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks5dk\" (UniqueName: \"kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk\") pod \"f173c053-534a-48b1-b515-85441479af6c\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.330512 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities\") pod \"f173c053-534a-48b1-b515-85441479af6c\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.330789 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content\") pod \"f173c053-534a-48b1-b515-85441479af6c\" (UID: \"f173c053-534a-48b1-b515-85441479af6c\") " Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.331538 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities" (OuterVolumeSpecName: "utilities") pod "f173c053-534a-48b1-b515-85441479af6c" (UID: "f173c053-534a-48b1-b515-85441479af6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.339546 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk" (OuterVolumeSpecName: "kube-api-access-ks5dk") pod "f173c053-534a-48b1-b515-85441479af6c" (UID: "f173c053-534a-48b1-b515-85441479af6c"). InnerVolumeSpecName "kube-api-access-ks5dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.391871 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f173c053-534a-48b1-b515-85441479af6c" (UID: "f173c053-534a-48b1-b515-85441479af6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.433463 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks5dk\" (UniqueName: \"kubernetes.io/projected/f173c053-534a-48b1-b515-85441479af6c-kube-api-access-ks5dk\") on node \"crc\" DevicePath \"\"" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.433503 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:02:24 crc kubenswrapper[4877]: I1211 19:02:24.433513 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f173c053-534a-48b1-b515-85441479af6c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.090933 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6drv" event={"ID":"f173c053-534a-48b1-b515-85441479af6c","Type":"ContainerDied","Data":"62bc511336559d5bf3795baa4dfef4cc63880aa69be96d8479ee002eda757ed0"} Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.091396 4877 scope.go:117] "RemoveContainer" containerID="f128703a926a1a1e548117321d66e68d9c864dbe8365b163739b7c55931faff9" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.090994 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6drv" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.126488 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.126574 4877 scope.go:117] "RemoveContainer" containerID="72c81a7c4d77a36239581b398b7c353ed734a53a71693e09736d44bd519ab61f" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.137682 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z6drv"] Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.149334 4877 scope.go:117] "RemoveContainer" containerID="ed006d3d62a2a3e956f9c50412c62e2786cafdcd204bf547f9bc723aacb0869c" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.224958 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f173c053-534a-48b1-b515-85441479af6c" path="/var/lib/kubelet/pods/f173c053-534a-48b1-b515-85441479af6c/volumes" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.496985 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-fsjfl_1212481e-8248-4bfe-903d-b6b08b87ead6/cert-manager-controller/0.log" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.647457 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s4sld_57038dbe-549d-4b29-b24a-4d32261c3a50/cert-manager-cainjector/0.log" Dec 11 19:02:25 crc kubenswrapper[4877]: I1211 19:02:25.681902 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qzspd_a89aaf35-60f1-481e-9f2f-4bcf0f70cec7/cert-manager-webhook/0.log" Dec 11 19:02:29 crc kubenswrapper[4877]: I1211 19:02:29.226716 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:02:29 crc kubenswrapper[4877]: E1211 
19:02:29.227498 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:02:38 crc kubenswrapper[4877]: I1211 19:02:38.565915 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-89bls_1eb218c2-7bcc-411b-90e3-ab813f9739a4/nmstate-console-plugin/0.log" Dec 11 19:02:38 crc kubenswrapper[4877]: I1211 19:02:38.751482 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-229td_a304c336-7461-4570-a515-4ae4c7d2cebd/nmstate-handler/0.log" Dec 11 19:02:38 crc kubenswrapper[4877]: I1211 19:02:38.816312 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-g7dnb_3caa2a2a-a894-44ae-8b4d-8bca5b08d582/kube-rbac-proxy/0.log" Dec 11 19:02:38 crc kubenswrapper[4877]: I1211 19:02:38.827958 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-g7dnb_3caa2a2a-a894-44ae-8b4d-8bca5b08d582/nmstate-metrics/0.log" Dec 11 19:02:38 crc kubenswrapper[4877]: I1211 19:02:38.968736 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-kkxhc_e132a5e4-7ab3-4161-bc40-3d20fc57dab7/nmstate-operator/0.log" Dec 11 19:02:39 crc kubenswrapper[4877]: I1211 19:02:39.005051 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-zxfhk_51a24aa4-4100-4dee-9b55-72a9c14f4859/nmstate-webhook/0.log" Dec 11 19:02:44 crc kubenswrapper[4877]: I1211 19:02:44.216535 4877 scope.go:117] "RemoveContainer" 
containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:02:44 crc kubenswrapper[4877]: E1211 19:02:44.217531 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:02:46 crc kubenswrapper[4877]: I1211 19:02:46.637556 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:02:46 crc kubenswrapper[4877]: I1211 19:02:46.637927 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.104226 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-jtbmt_786d79d5-52c0-410a-b4e4-b3df71e617ba/kube-rbac-proxy/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.186961 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-jtbmt_786d79d5-52c0-410a-b4e4-b3df71e617ba/controller/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.291901 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log" Dec 11 19:02:54 crc 
kubenswrapper[4877]: I1211 19:02:54.417202 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.454484 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.459960 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.556562 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.693883 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.708620 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.722760 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.765505 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.922770 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.940304 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.946682 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/controller/0.log" Dec 11 19:02:54 crc kubenswrapper[4877]: I1211 19:02:54.948589 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.108573 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/frr-metrics/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.186393 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/kube-rbac-proxy/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.202012 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/kube-rbac-proxy-frr/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.321027 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/reloader/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.459006 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-v5sxh_d3dad0c6-8977-4cff-9866-e95d84ccc658/frr-k8s-webhook-server/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.597655 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-554f49ddd5-7c57c_ab8873f5-e97e-483c-a6f4-dad1a15fb382/manager/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.817508 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bf4cbf554-xwtvp_36cfd319-6f46-4547-bc92-6d8f108f556b/webhook-server/0.log" Dec 11 19:02:55 crc kubenswrapper[4877]: I1211 19:02:55.988757 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9hdjf_661affbe-08b4-406d-b4e4-78cbefa4de67/kube-rbac-proxy/0.log" Dec 11 19:02:56 crc kubenswrapper[4877]: I1211 19:02:56.496863 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9hdjf_661affbe-08b4-406d-b4e4-78cbefa4de67/speaker/0.log" Dec 11 19:02:56 crc kubenswrapper[4877]: I1211 19:02:56.498072 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/frr/0.log" Dec 11 19:02:59 crc kubenswrapper[4877]: I1211 19:02:59.221100 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:02:59 crc kubenswrapper[4877]: E1211 19:02:59.223350 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.133938 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.217687 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:03:12 crc kubenswrapper[4877]: E1211 19:03:12.217912 4877 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.288257 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.300330 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.367305 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.509360 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.510573 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.547141 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/extract/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 
19:03:12.697481 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.820467 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.845228 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log" Dec 11 19:03:12 crc kubenswrapper[4877]: I1211 19:03:12.877750 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.021557 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.046876 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.084366 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/extract/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.196300 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.367102 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.380667 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.432332 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.720282 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.803329 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log" Dec 11 19:03:13 crc kubenswrapper[4877]: I1211 19:03:13.941115 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.020564 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/registry-server/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.123645 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.130407 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.152541 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.304948 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.341399 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.521960 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/2.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.558708 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/1.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.841246 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.914057 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/registry-server/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.930035 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.974768 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log" Dec 11 19:03:14 crc kubenswrapper[4877]: I1211 19:03:14.983492 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.135660 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.143526 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.233902 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.288754 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/registry-server/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.411325 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.420282 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.424628 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.583185 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.607983 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log" Dec 11 19:03:15 crc kubenswrapper[4877]: I1211 19:03:15.755325 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/registry-server/0.log" Dec 11 19:03:16 crc kubenswrapper[4877]: I1211 19:03:16.638129 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:03:16 crc kubenswrapper[4877]: I1211 19:03:16.638197 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 11 19:03:16 crc kubenswrapper[4877]: I1211 19:03:16.638244 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 19:03:16 crc kubenswrapper[4877]: I1211 19:03:16.639103 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 19:03:16 crc kubenswrapper[4877]: I1211 19:03:16.639169 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" gracePeriod=600 Dec 11 19:03:16 crc kubenswrapper[4877]: E1211 19:03:16.768312 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:03:17 crc kubenswrapper[4877]: I1211 19:03:17.584326 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" exitCode=0 Dec 11 19:03:17 crc kubenswrapper[4877]: I1211 19:03:17.584394 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41"} Dec 11 19:03:17 crc kubenswrapper[4877]: I1211 19:03:17.584441 4877 scope.go:117] "RemoveContainer" containerID="9ad5ad385396913aff957a066ae764bcb4e68b5835ff68bcce7357db0aa93e92" Dec 11 19:03:17 crc kubenswrapper[4877]: I1211 19:03:17.584988 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:03:17 crc kubenswrapper[4877]: E1211 19:03:17.585294 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:03:23 crc kubenswrapper[4877]: I1211 19:03:23.215626 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:03:23 crc kubenswrapper[4877]: E1211 19:03:23.216680 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:03:32 crc kubenswrapper[4877]: I1211 19:03:32.215896 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:03:32 crc kubenswrapper[4877]: E1211 19:03:32.218945 4877 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:03:36 crc kubenswrapper[4877]: I1211 19:03:36.215634 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:03:36 crc kubenswrapper[4877]: E1211 19:03:36.216367 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:03:43 crc kubenswrapper[4877]: I1211 19:03:43.215859 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:03:43 crc kubenswrapper[4877]: E1211 19:03:43.216825 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:03:47 crc kubenswrapper[4877]: I1211 19:03:47.216676 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:03:47 crc kubenswrapper[4877]: E1211 19:03:47.217772 4877 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:03:54 crc kubenswrapper[4877]: I1211 19:03:54.216292 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:03:54 crc kubenswrapper[4877]: E1211 19:03:54.217527 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:04:02 crc kubenswrapper[4877]: I1211 19:04:02.216178 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:04:02 crc kubenswrapper[4877]: E1211 19:04:02.217353 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:04:08 crc kubenswrapper[4877]: I1211 19:04:08.215801 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:04:08 crc kubenswrapper[4877]: E1211 19:04:08.216779 4877 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:04:15 crc kubenswrapper[4877]: I1211 19:04:15.215613 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:04:15 crc kubenswrapper[4877]: E1211 19:04:15.217421 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:04:23 crc kubenswrapper[4877]: I1211 19:04:23.216224 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:04:23 crc kubenswrapper[4877]: E1211 19:04:23.217114 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:04:26 crc kubenswrapper[4877]: I1211 19:04:26.216055 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:04:26 crc kubenswrapper[4877]: E1211 
19:04:26.216777 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:04:35 crc kubenswrapper[4877]: I1211 19:04:35.215327 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:04:35 crc kubenswrapper[4877]: E1211 19:04:35.216250 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:04:38 crc kubenswrapper[4877]: I1211 19:04:38.215645 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:04:38 crc kubenswrapper[4877]: E1211 19:04:38.217362 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:04:46 crc kubenswrapper[4877]: I1211 19:04:46.215711 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:04:46 crc kubenswrapper[4877]: E1211 
19:04:46.217051 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:04:53 crc kubenswrapper[4877]: I1211 19:04:53.216427 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:04:53 crc kubenswrapper[4877]: E1211 19:04:53.217506 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:04:54 crc kubenswrapper[4877]: I1211 19:04:54.436225 4877 generic.go:334] "Generic (PLEG): container finished" podID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerID="4c4c5e8f28edd1290f10d0d09fa142106e844fe14c26373ceb05b245374bd852" exitCode=0 Dec 11 19:04:54 crc kubenswrapper[4877]: I1211 19:04:54.436337 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" event={"ID":"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693","Type":"ContainerDied","Data":"4c4c5e8f28edd1290f10d0d09fa142106e844fe14c26373ceb05b245374bd852"} Dec 11 19:04:54 crc kubenswrapper[4877]: I1211 19:04:54.437765 4877 scope.go:117] "RemoveContainer" containerID="4c4c5e8f28edd1290f10d0d09fa142106e844fe14c26373ceb05b245374bd852" Dec 11 19:04:54 crc kubenswrapper[4877]: I1211 19:04:54.904180 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-ssbsl_must-gather-jm5tr_a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693/gather/0.log" Dec 11 19:05:01 crc kubenswrapper[4877]: I1211 19:05:01.215820 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:05:01 crc kubenswrapper[4877]: E1211 19:05:01.216903 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.380143 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ssbsl/must-gather-jm5tr"] Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.381211 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="copy" containerID="cri-o://a13690f25e3126b418025215bf4221a250bc3a1bbda27075b56c16a4aa1b919e" gracePeriod=2 Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.388634 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ssbsl/must-gather-jm5tr"] Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.529123 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ssbsl_must-gather-jm5tr_a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693/copy/0.log" Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.529612 4877 generic.go:334] "Generic (PLEG): container finished" podID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerID="a13690f25e3126b418025215bf4221a250bc3a1bbda27075b56c16a4aa1b919e" exitCode=143 Dec 11 19:05:03 crc 
kubenswrapper[4877]: I1211 19:05:03.829554 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ssbsl_must-gather-jm5tr_a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693/copy/0.log" Dec 11 19:05:03 crc kubenswrapper[4877]: I1211 19:05:03.830177 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.030207 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output\") pod \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.030447 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdqk\" (UniqueName: \"kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk\") pod \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\" (UID: \"a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693\") " Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.040394 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk" (OuterVolumeSpecName: "kube-api-access-sxdqk") pod "a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" (UID: "a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693"). InnerVolumeSpecName "kube-api-access-sxdqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.133080 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdqk\" (UniqueName: \"kubernetes.io/projected/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-kube-api-access-sxdqk\") on node \"crc\" DevicePath \"\"" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.183692 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" (UID: "a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.234903 4877 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.540199 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ssbsl_must-gather-jm5tr_a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693/copy/0.log" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.540763 4877 scope.go:117] "RemoveContainer" containerID="a13690f25e3126b418025215bf4221a250bc3a1bbda27075b56c16a4aa1b919e" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.540815 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ssbsl/must-gather-jm5tr" Dec 11 19:05:04 crc kubenswrapper[4877]: I1211 19:05:04.610588 4877 scope.go:117] "RemoveContainer" containerID="4c4c5e8f28edd1290f10d0d09fa142106e844fe14c26373ceb05b245374bd852" Dec 11 19:05:05 crc kubenswrapper[4877]: I1211 19:05:05.215721 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:05:05 crc kubenswrapper[4877]: E1211 19:05:05.216017 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:05:05 crc kubenswrapper[4877]: I1211 19:05:05.226865 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" path="/var/lib/kubelet/pods/a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693/volumes" Dec 11 19:05:14 crc kubenswrapper[4877]: I1211 19:05:14.216007 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:05:14 crc kubenswrapper[4877]: E1211 19:05:14.217419 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:05:17 crc kubenswrapper[4877]: I1211 19:05:17.216280 4877 scope.go:117] "RemoveContainer" 
containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:05:17 crc kubenswrapper[4877]: E1211 19:05:17.217165 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:05:25 crc kubenswrapper[4877]: I1211 19:05:25.217695 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:05:25 crc kubenswrapper[4877]: E1211 19:05:25.222067 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:05:31 crc kubenswrapper[4877]: I1211 19:05:31.216984 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:05:31 crc kubenswrapper[4877]: E1211 19:05:31.218168 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:05:39 crc kubenswrapper[4877]: I1211 19:05:39.231220 4877 scope.go:117] "RemoveContainer" 
containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:05:39 crc kubenswrapper[4877]: E1211 19:05:39.232510 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:05:46 crc kubenswrapper[4877]: I1211 19:05:46.215707 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:05:46 crc kubenswrapper[4877]: E1211 19:05:46.216892 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:05:50 crc kubenswrapper[4877]: I1211 19:05:50.215495 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:05:50 crc kubenswrapper[4877]: E1211 19:05:50.216589 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:06:00 crc kubenswrapper[4877]: I1211 19:06:00.216181 4877 scope.go:117] 
"RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:06:00 crc kubenswrapper[4877]: E1211 19:06:00.217290 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:06:04 crc kubenswrapper[4877]: I1211 19:06:04.215885 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:06:04 crc kubenswrapper[4877]: E1211 19:06:04.216989 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:06:11 crc kubenswrapper[4877]: I1211 19:06:11.215861 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:06:11 crc kubenswrapper[4877]: E1211 19:06:11.216916 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:06:15 crc kubenswrapper[4877]: I1211 19:06:15.215758 4877 scope.go:117] 
"RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:06:15 crc kubenswrapper[4877]: E1211 19:06:15.217685 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:06:23 crc kubenswrapper[4877]: I1211 19:06:23.215588 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:06:23 crc kubenswrapper[4877]: E1211 19:06:23.216362 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:06:28 crc kubenswrapper[4877]: I1211 19:06:28.215411 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:06:28 crc kubenswrapper[4877]: E1211 19:06:28.216119 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:06:36 crc kubenswrapper[4877]: I1211 19:06:36.215983 4877 
scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:06:36 crc kubenswrapper[4877]: E1211 19:06:36.216665 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:06:43 crc kubenswrapper[4877]: I1211 19:06:43.216227 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:06:43 crc kubenswrapper[4877]: E1211 19:06:43.217326 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:06:47 crc kubenswrapper[4877]: I1211 19:06:47.471920 4877 scope.go:117] "RemoveContainer" containerID="dd6187e6c4dc0172e07cb359f8e77e92ce24627996931cf3fb47a0d05188b107" Dec 11 19:06:51 crc kubenswrapper[4877]: I1211 19:06:51.216153 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:06:52 crc kubenswrapper[4877]: I1211 19:06:52.010446 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"} Dec 11 19:06:52 crc 
kubenswrapper[4877]: I1211 19:06:52.011249 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:06:54 crc kubenswrapper[4877]: I1211 19:06:54.216877 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:06:54 crc kubenswrapper[4877]: E1211 19:06:54.217461 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:07:01 crc kubenswrapper[4877]: I1211 19:07:01.153028 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:07:09 crc kubenswrapper[4877]: I1211 19:07:09.223704 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:07:09 crc kubenswrapper[4877]: E1211 19:07:09.224695 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:07:20 crc kubenswrapper[4877]: I1211 19:07:20.215640 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:07:20 crc kubenswrapper[4877]: E1211 19:07:20.216515 4877 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:07:34 crc kubenswrapper[4877]: I1211 19:07:34.215334 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:07:34 crc kubenswrapper[4877]: E1211 19:07:34.216418 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.398189 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dkpg7/must-gather-bhfm9"] Dec 11 19:07:44 crc kubenswrapper[4877]: E1211 19:07:44.399096 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="extract-content" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399108 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="extract-content" Dec 11 19:07:44 crc kubenswrapper[4877]: E1211 19:07:44.399135 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="extract-utilities" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399141 4877 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="extract-utilities" Dec 11 19:07:44 crc kubenswrapper[4877]: E1211 19:07:44.399148 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="copy" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399154 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="copy" Dec 11 19:07:44 crc kubenswrapper[4877]: E1211 19:07:44.399167 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="gather" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399172 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="gather" Dec 11 19:07:44 crc kubenswrapper[4877]: E1211 19:07:44.399188 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="registry-server" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399193 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="registry-server" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399358 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="f173c053-534a-48b1-b515-85441479af6c" containerName="registry-server" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399387 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="copy" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.399399 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ba0c98-95b6-4e1a-8fac-d5c2ea26e693" containerName="gather" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.400337 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.405776 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dkpg7"/"kube-root-ca.crt" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.405997 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dkpg7"/"openshift-service-ca.crt" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.426936 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dkpg7/must-gather-bhfm9"] Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.508172 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.508361 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9b6q\" (UniqueName: \"kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.609813 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9b6q\" (UniqueName: \"kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.609929 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.610290 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.642804 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9b6q\" (UniqueName: \"kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q\") pod \"must-gather-bhfm9\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:44 crc kubenswrapper[4877]: I1211 19:07:44.732220 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:07:45 crc kubenswrapper[4877]: I1211 19:07:45.191637 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dkpg7/must-gather-bhfm9"] Dec 11 19:07:45 crc kubenswrapper[4877]: I1211 19:07:45.560610 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" event={"ID":"2eddb64a-1239-4517-816d-1090f5a55755","Type":"ContainerStarted","Data":"738a844403ccb011095a19bcbafa4e60dc410812f4c0049381dd32283ac0a076"} Dec 11 19:07:45 crc kubenswrapper[4877]: I1211 19:07:45.562081 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" event={"ID":"2eddb64a-1239-4517-816d-1090f5a55755","Type":"ContainerStarted","Data":"419036c2a869d6459616d2b62c2e3d50c8bb2d7984cc136d454788b1c443d9ac"} Dec 11 19:07:46 crc kubenswrapper[4877]: I1211 19:07:46.572982 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" event={"ID":"2eddb64a-1239-4517-816d-1090f5a55755","Type":"ContainerStarted","Data":"423dc9db7ee3868a39f7fcaee5eb1f474edac46e9adf0c3c802357a141b565dd"} Dec 11 19:07:46 crc kubenswrapper[4877]: I1211 19:07:46.593668 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" podStartSLOduration=2.593647279 podStartE2EDuration="2.593647279s" podCreationTimestamp="2025-12-11 19:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 19:07:46.586256552 +0000 UTC m=+4027.612500596" watchObservedRunningTime="2025-12-11 19:07:46.593647279 +0000 UTC m=+4027.619891323" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.214403 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-82j97"] Dec 11 19:07:49 crc kubenswrapper[4877]: 
I1211 19:07:49.220764 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.221774 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:07:49 crc kubenswrapper[4877]: E1211 19:07:49.222066 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.225107 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dkpg7"/"default-dockercfg-8lnxt" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.326935 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2m2\" (UniqueName: \"kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.327366 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.429157 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2m2\" (UniqueName: 
\"kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.429246 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.429382 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.448429 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2m2\" (UniqueName: \"kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2\") pod \"crc-debug-82j97\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.545140 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:07:49 crc kubenswrapper[4877]: W1211 19:07:49.572756 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07a5924e_774a_42d7_afa8_0106f0d5350c.slice/crio-b2965dd838543e7705b778c5249b224e2ef60b8322e1ba3ada4911ef326e012b WatchSource:0}: Error finding container b2965dd838543e7705b778c5249b224e2ef60b8322e1ba3ada4911ef326e012b: Status 404 returned error can't find the container with id b2965dd838543e7705b778c5249b224e2ef60b8322e1ba3ada4911ef326e012b Dec 11 19:07:49 crc kubenswrapper[4877]: I1211 19:07:49.607268 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-82j97" event={"ID":"07a5924e-774a-42d7-afa8-0106f0d5350c","Type":"ContainerStarted","Data":"b2965dd838543e7705b778c5249b224e2ef60b8322e1ba3ada4911ef326e012b"} Dec 11 19:07:50 crc kubenswrapper[4877]: I1211 19:07:50.616263 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-82j97" event={"ID":"07a5924e-774a-42d7-afa8-0106f0d5350c","Type":"ContainerStarted","Data":"4ea2d095a25bb46f75525373373a913bd19dfc22511f17e367463218c4ea35bf"} Dec 11 19:07:50 crc kubenswrapper[4877]: I1211 19:07:50.637244 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dkpg7/crc-debug-82j97" podStartSLOduration=1.637225041 podStartE2EDuration="1.637225041s" podCreationTimestamp="2025-12-11 19:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 19:07:50.629468645 +0000 UTC m=+4031.655712689" watchObservedRunningTime="2025-12-11 19:07:50.637225041 +0000 UTC m=+4031.663469085" Dec 11 19:08:01 crc kubenswrapper[4877]: I1211 19:08:01.215659 4877 scope.go:117] "RemoveContainer" 
containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:08:01 crc kubenswrapper[4877]: E1211 19:08:01.216271 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:08:14 crc kubenswrapper[4877]: I1211 19:08:14.215561 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:08:14 crc kubenswrapper[4877]: E1211 19:08:14.217161 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:08:23 crc kubenswrapper[4877]: I1211 19:08:23.956025 4877 generic.go:334] "Generic (PLEG): container finished" podID="07a5924e-774a-42d7-afa8-0106f0d5350c" containerID="4ea2d095a25bb46f75525373373a913bd19dfc22511f17e367463218c4ea35bf" exitCode=0 Dec 11 19:08:23 crc kubenswrapper[4877]: I1211 19:08:23.956325 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-82j97" event={"ID":"07a5924e-774a-42d7-afa8-0106f0d5350c","Type":"ContainerDied","Data":"4ea2d095a25bb46f75525373373a913bd19dfc22511f17e367463218c4ea35bf"} Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.070492 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.102496 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-82j97"] Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.111742 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-82j97"] Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.233125 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp2m2\" (UniqueName: \"kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2\") pod \"07a5924e-774a-42d7-afa8-0106f0d5350c\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.233379 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host\") pod \"07a5924e-774a-42d7-afa8-0106f0d5350c\" (UID: \"07a5924e-774a-42d7-afa8-0106f0d5350c\") " Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.234894 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host" (OuterVolumeSpecName: "host") pod "07a5924e-774a-42d7-afa8-0106f0d5350c" (UID: "07a5924e-774a-42d7-afa8-0106f0d5350c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.245599 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2" (OuterVolumeSpecName: "kube-api-access-sp2m2") pod "07a5924e-774a-42d7-afa8-0106f0d5350c" (UID: "07a5924e-774a-42d7-afa8-0106f0d5350c"). InnerVolumeSpecName "kube-api-access-sp2m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.247074 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a5924e-774a-42d7-afa8-0106f0d5350c" path="/var/lib/kubelet/pods/07a5924e-774a-42d7-afa8-0106f0d5350c/volumes" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.336396 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp2m2\" (UniqueName: \"kubernetes.io/projected/07a5924e-774a-42d7-afa8-0106f0d5350c-kube-api-access-sp2m2\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.336432 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/07a5924e-774a-42d7-afa8-0106f0d5350c-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.987697 4877 scope.go:117] "RemoveContainer" containerID="4ea2d095a25bb46f75525373373a913bd19dfc22511f17e367463218c4ea35bf" Dec 11 19:08:25 crc kubenswrapper[4877]: I1211 19:08:25.988071 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-82j97" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.321308 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-krwwv"] Dec 11 19:08:26 crc kubenswrapper[4877]: E1211 19:08:26.321786 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a5924e-774a-42d7-afa8-0106f0d5350c" containerName="container-00" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.321799 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a5924e-774a-42d7-afa8-0106f0d5350c" containerName="container-00" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.322018 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a5924e-774a-42d7-afa8-0106f0d5350c" containerName="container-00" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.322604 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.324539 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dkpg7"/"default-dockercfg-8lnxt" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.456484 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.456683 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct77b\" (UniqueName: \"kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " 
pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.558780 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.558944 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.559211 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct77b\" (UniqueName: \"kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.583941 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct77b\" (UniqueName: \"kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b\") pod \"crc-debug-krwwv\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.645864 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.997886 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" event={"ID":"7d214be3-7960-42ea-9a92-9b48b84fee9c","Type":"ContainerStarted","Data":"69f1b3b4065848b2738c7948de3a466c5cf9b3b81815714061cec919a48809a6"} Dec 11 19:08:26 crc kubenswrapper[4877]: I1211 19:08:26.998232 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" event={"ID":"7d214be3-7960-42ea-9a92-9b48b84fee9c","Type":"ContainerStarted","Data":"fde6bfa17f536f033c1eeefd5b3cea606831ee263cf0573f614ad603fe9993d9"} Dec 11 19:08:27 crc kubenswrapper[4877]: I1211 19:08:27.019204 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" podStartSLOduration=1.019176525 podStartE2EDuration="1.019176525s" podCreationTimestamp="2025-12-11 19:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 19:08:27.011475541 +0000 UTC m=+4068.037719585" watchObservedRunningTime="2025-12-11 19:08:27.019176525 +0000 UTC m=+4068.045420579" Dec 11 19:08:27 crc kubenswrapper[4877]: I1211 19:08:27.215080 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:08:28 crc kubenswrapper[4877]: I1211 19:08:28.013326 4877 generic.go:334] "Generic (PLEG): container finished" podID="7d214be3-7960-42ea-9a92-9b48b84fee9c" containerID="69f1b3b4065848b2738c7948de3a466c5cf9b3b81815714061cec919a48809a6" exitCode=0 Dec 11 19:08:28 crc kubenswrapper[4877]: I1211 19:08:28.013413 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" 
event={"ID":"7d214be3-7960-42ea-9a92-9b48b84fee9c","Type":"ContainerDied","Data":"69f1b3b4065848b2738c7948de3a466c5cf9b3b81815714061cec919a48809a6"} Dec 11 19:08:28 crc kubenswrapper[4877]: I1211 19:08:28.017691 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4"} Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.117949 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.246960 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct77b\" (UniqueName: \"kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b\") pod \"7d214be3-7960-42ea-9a92-9b48b84fee9c\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.247047 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host\") pod \"7d214be3-7960-42ea-9a92-9b48b84fee9c\" (UID: \"7d214be3-7960-42ea-9a92-9b48b84fee9c\") " Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.247753 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host" (OuterVolumeSpecName: "host") pod "7d214be3-7960-42ea-9a92-9b48b84fee9c" (UID: "7d214be3-7960-42ea-9a92-9b48b84fee9c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.270454 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-krwwv"] Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.271560 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b" (OuterVolumeSpecName: "kube-api-access-ct77b") pod "7d214be3-7960-42ea-9a92-9b48b84fee9c" (UID: "7d214be3-7960-42ea-9a92-9b48b84fee9c"). InnerVolumeSpecName "kube-api-access-ct77b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.277047 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-krwwv"] Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.349095 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct77b\" (UniqueName: \"kubernetes.io/projected/7d214be3-7960-42ea-9a92-9b48b84fee9c-kube-api-access-ct77b\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:29 crc kubenswrapper[4877]: I1211 19:08:29.349132 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d214be3-7960-42ea-9a92-9b48b84fee9c-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.037582 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde6bfa17f536f033c1eeefd5b3cea606831ee263cf0573f614ad603fe9993d9" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.037825 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-krwwv" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.472362 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-bpwjl"] Dec 11 19:08:30 crc kubenswrapper[4877]: E1211 19:08:30.473211 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d214be3-7960-42ea-9a92-9b48b84fee9c" containerName="container-00" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.473229 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d214be3-7960-42ea-9a92-9b48b84fee9c" containerName="container-00" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.473482 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d214be3-7960-42ea-9a92-9b48b84fee9c" containerName="container-00" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.474252 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.476164 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dkpg7"/"default-dockercfg-8lnxt" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.570963 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hnq2\" (UniqueName: \"kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.571219 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " 
pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.672625 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.672696 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hnq2\" (UniqueName: \"kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.672810 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.694425 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hnq2\" (UniqueName: \"kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2\") pod \"crc-debug-bpwjl\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:30 crc kubenswrapper[4877]: I1211 19:08:30.793349 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:31 crc kubenswrapper[4877]: I1211 19:08:31.046849 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" event={"ID":"47183cf9-abf7-48ec-bc3e-909c4b2b9436","Type":"ContainerStarted","Data":"a6c0612b26d922f3a6c2d0b04ced3313d8543459b50780b46a4670c1b5173918"} Dec 11 19:08:31 crc kubenswrapper[4877]: I1211 19:08:31.224160 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d214be3-7960-42ea-9a92-9b48b84fee9c" path="/var/lib/kubelet/pods/7d214be3-7960-42ea-9a92-9b48b84fee9c/volumes" Dec 11 19:08:32 crc kubenswrapper[4877]: I1211 19:08:32.060192 4877 generic.go:334] "Generic (PLEG): container finished" podID="47183cf9-abf7-48ec-bc3e-909c4b2b9436" containerID="f6ab4672ab1a2606b905e33bfd4fe0ef6f798a5b91fb596cc09379377aac4562" exitCode=0 Dec 11 19:08:32 crc kubenswrapper[4877]: I1211 19:08:32.060415 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" event={"ID":"47183cf9-abf7-48ec-bc3e-909c4b2b9436","Type":"ContainerDied","Data":"f6ab4672ab1a2606b905e33bfd4fe0ef6f798a5b91fb596cc09379377aac4562"} Dec 11 19:08:32 crc kubenswrapper[4877]: I1211 19:08:32.105877 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-bpwjl"] Dec 11 19:08:32 crc kubenswrapper[4877]: I1211 19:08:32.118167 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dkpg7/crc-debug-bpwjl"] Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.220459 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.327845 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host\") pod \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.327903 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hnq2\" (UniqueName: \"kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2\") pod \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\" (UID: \"47183cf9-abf7-48ec-bc3e-909c4b2b9436\") " Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.328005 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host" (OuterVolumeSpecName: "host") pod "47183cf9-abf7-48ec-bc3e-909c4b2b9436" (UID: "47183cf9-abf7-48ec-bc3e-909c4b2b9436"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.328606 4877 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47183cf9-abf7-48ec-bc3e-909c4b2b9436-host\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.340967 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2" (OuterVolumeSpecName: "kube-api-access-2hnq2") pod "47183cf9-abf7-48ec-bc3e-909c4b2b9436" (UID: "47183cf9-abf7-48ec-bc3e-909c4b2b9436"). InnerVolumeSpecName "kube-api-access-2hnq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:08:33 crc kubenswrapper[4877]: I1211 19:08:33.430788 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hnq2\" (UniqueName: \"kubernetes.io/projected/47183cf9-abf7-48ec-bc3e-909c4b2b9436-kube-api-access-2hnq2\") on node \"crc\" DevicePath \"\"" Dec 11 19:08:34 crc kubenswrapper[4877]: I1211 19:08:34.086341 4877 scope.go:117] "RemoveContainer" containerID="f6ab4672ab1a2606b905e33bfd4fe0ef6f798a5b91fb596cc09379377aac4562" Dec 11 19:08:34 crc kubenswrapper[4877]: I1211 19:08:34.086363 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/crc-debug-bpwjl" Dec 11 19:08:35 crc kubenswrapper[4877]: I1211 19:08:35.228198 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47183cf9-abf7-48ec-bc3e-909c4b2b9436" path="/var/lib/kubelet/pods/47183cf9-abf7-48ec-bc3e-909c4b2b9436/volumes" Dec 11 19:08:59 crc kubenswrapper[4877]: I1211 19:08:59.739015 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f986c9df4-vbvbf_a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03/barbican-api/0.log" Dec 11 19:08:59 crc kubenswrapper[4877]: I1211 19:08:59.859196 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f986c9df4-vbvbf_a4b5abae-3d7b-4f39-b2bb-f71cf0b0aa03/barbican-api-log/0.log" Dec 11 19:08:59 crc kubenswrapper[4877]: I1211 19:08:59.903799 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b59dbf9b-n74fk_60930296-787e-4fea-8180-8b7d3aba29b8/barbican-keystone-listener/0.log" Dec 11 19:08:59 crc kubenswrapper[4877]: I1211 19:08:59.948300 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55b59dbf9b-n74fk_60930296-787e-4fea-8180-8b7d3aba29b8/barbican-keystone-listener-log/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.067300 4877 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-85794d5dd7-rmjkp_79f3b97f-f3f1-4547-81e4-e2c7c833745e/barbican-worker/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.121618 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-85794d5dd7-rmjkp_79f3b97f-f3f1-4547-81e4-e2c7c833745e/barbican-worker-log/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.276384 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-47j6k_3cbbcf61-700d-4648-b4b5-2d48ca9f1a5a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.355646 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/ceilometer-central-agent/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.418033 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/ceilometer-notification-agent/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.466295 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/proxy-httpd/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.521959 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f2235a50-9478-4081-bdad-597e59773901/sg-core/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.657333 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcf8e591-86a6-4c17-89a0-9d93ec7bb590/cinder-api/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.674623 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fcf8e591-86a6-4c17-89a0-9d93ec7bb590/cinder-api-log/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 
19:09:00.893189 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8/cinder-scheduler/0.log" Dec 11 19:09:00 crc kubenswrapper[4877]: I1211 19:09:00.955666 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_60e045b6-5bd7-4ad4-bde8-5a6bfa0e19b8/probe/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.016283 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4twd7_ba5e024b-8ec8-4214-bca2-9dbf57f69623/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.181684 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-js6ck_728cbe41-aead-4492-bed9-312b93b70b88/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.234159 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/init/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.380668 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/init/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.455393 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-b6n82_aa89614b-79d3-467a-8b6a-0e5e28606a1a/dnsmasq-dns/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.492009 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hx5bp_2450d804-2d74-4d93-8a06-95190b0c8e94/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.627054 4877 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_a8011cc1-2d08-433e-bc2b-71f11aa75cd2/glance-httpd/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.688477 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a8011cc1-2d08-433e-bc2b-71f11aa75cd2/glance-log/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.858154 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_11c62194-8ad1-4529-98d8-7ad070a3ac30/glance-log/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.872760 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_11c62194-8ad1-4529-98d8-7ad070a3ac30/glance-httpd/0.log" Dec 11 19:09:01 crc kubenswrapper[4877]: I1211 19:09:01.995511 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9dbfd97b-ck4jv_2afc51b6-dafc-47ce-875a-3a6249f69b47/horizon/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.221237 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-f27jv_9dd6596b-9571-4ce0-8658-78d5f99fbb5a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.388304 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c68cr_cd857578-73bd-4b2b-b7ba-0b6a7058b48e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.415129 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9dbfd97b-ck4jv_2afc51b6-dafc-47ce-875a-3a6249f69b47/horizon-log/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.579619 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-c7754d7b9-8ngjr_68a8f0df-c9a5-4812-860f-492cfeeae4bb/keystone-api/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.631793 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29424661-j6vcg_cc6bcaa4-72c0-4cac-b05d-fe57d5086736/keystone-cron/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.932141 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_614a50f9-81ab-4bd7-a01d-f8074be6b773/kube-state-metrics/0.log" Dec 11 19:09:02 crc kubenswrapper[4877]: I1211 19:09:02.972607 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gbvcg_24092f15-2f1a-441e-a0b9-8bf295b95bd0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:03 crc kubenswrapper[4877]: I1211 19:09:03.227464 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4dd6dd9-6mv2v_4808e7d5-7e53-4b59-a46c-86838df224c0/neutron-api/0.log" Dec 11 19:09:03 crc kubenswrapper[4877]: I1211 19:09:03.244207 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4dd6dd9-6mv2v_4808e7d5-7e53-4b59-a46c-86838df224c0/neutron-httpd/0.log" Dec 11 19:09:03 crc kubenswrapper[4877]: I1211 19:09:03.509661 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-7768d_cb1dfedd-c524-4375-9b91-d9f87e34e2d0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:03 crc kubenswrapper[4877]: I1211 19:09:03.845952 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd82f9ba-3316-4498-b434-e0eea4518646/nova-api-log/0.log" Dec 11 19:09:03 crc kubenswrapper[4877]: I1211 19:09:03.992884 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_c8fc48ab-7a71-46a1-9557-65c034c6af7e/nova-cell0-conductor-conductor/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.226785 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd82f9ba-3316-4498-b434-e0eea4518646/nova-api-api/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.253506 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6d9e1d47-035b-4789-91ca-61940c628347/nova-cell1-conductor-conductor/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.333770 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9ab5e774-7a03-4065-9eb8-c68aaff8d6c6/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.458413 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6b22p_27668d56-a427-4392-85d2-4e4cc52342aa/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.533752 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_09959cf5-104d-4577-b6ae-d710a75c4aaf/nova-metadata-log/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.929897 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/mysql-bootstrap/0.log" Dec 11 19:09:04 crc kubenswrapper[4877]: I1211 19:09:04.978978 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1080b5f7-e7c7-4ec0-8ddf-22e4c37f56c6/nova-scheduler-scheduler/0.log" Dec 11 19:09:05 crc kubenswrapper[4877]: I1211 19:09:05.136280 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/mysql-bootstrap/0.log" Dec 11 19:09:05 crc kubenswrapper[4877]: 
I1211 19:09:05.154450 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ce9369cc-7934-4f85-9d10-e89f50e28710/galera/0.log" Dec 11 19:09:05 crc kubenswrapper[4877]: I1211 19:09:05.337583 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/mysql-bootstrap/0.log" Dec 11 19:09:05 crc kubenswrapper[4877]: I1211 19:09:05.557200 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/mysql-bootstrap/0.log" Dec 11 19:09:05 crc kubenswrapper[4877]: I1211 19:09:05.576795 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e433a730-179e-4edf-93a9-9468b1714468/galera/0.log" Dec 11 19:09:06 crc kubenswrapper[4877]: I1211 19:09:06.004533 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_09959cf5-104d-4577-b6ae-d710a75c4aaf/nova-metadata-metadata/0.log" Dec 11 19:09:06 crc kubenswrapper[4877]: I1211 19:09:06.556925 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_22343eeb-fed7-457f-a507-a83d4071ee3a/openstackclient/0.log" Dec 11 19:09:06 crc kubenswrapper[4877]: I1211 19:09:06.568763 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-f2twl_7600f73e-c321-4ec0-af52-684b7b75ec9f/openstack-network-exporter/0.log" Dec 11 19:09:06 crc kubenswrapper[4877]: I1211 19:09:06.927818 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server-init/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.082410 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server-init/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.155895 4877 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovs-vswitchd/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.182152 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6g4mt_2cfb93fc-8582-42dd-8c57-afd3fcd25b40/ovsdb-server/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.268158 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zdz6c_efc5ef2c-fcea-4de5-a085-47ff35a33522/ovn-controller/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.394500 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-x5zzb_3346dff0-5931-4f19-817b-bc38011e3718/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.524862 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38d6b86-ecb2-47de-a1f3-6670ff0eb78b/openstack-network-exporter/0.log" Dec 11 19:09:07 crc kubenswrapper[4877]: I1211 19:09:07.612492 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38d6b86-ecb2-47de-a1f3-6670ff0eb78b/ovn-northd/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.124263 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b163a060-e7a9-4e81-992b-a9c72bbac544/openstack-network-exporter/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.162453 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b163a060-e7a9-4e81-992b-a9c72bbac544/ovsdbserver-nb/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.227616 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde/openstack-network-exporter/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 
19:09:08.324746 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d65c0ec2-40dc-4d0f-97f7-e8e74ceffdde/ovsdbserver-sb/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.563585 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d988cd48-2h828_5f1de16a-c21b-4876-99ca-60b34d1b7e75/placement-api/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.678024 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d988cd48-2h828_5f1de16a-c21b-4876-99ca-60b34d1b7e75/placement-log/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.686485 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/setup-container/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.861889 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/setup-container/0.log" Dec 11 19:09:08 crc kubenswrapper[4877]: I1211 19:09:08.919868 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/setup-container/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.003843 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c488377-3b02-4126-b40d-6b8568352c77/rabbitmq/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.167221 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/setup-container/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.196616 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e630b01-bd78-44dc-bdc6-82a0bad7825c/rabbitmq/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.266436 4877 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2sxxs_c4fd09ef-4694-42ad-b7fa-17a721fec8f3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.458567 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-wnw2w_153cc7ad-8854-4f42-80cd-2fcdb2f453cd/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.470339 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-shd5r_db547bdf-a5ee-410d-8a44-7bc5af05321d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.677282 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-b9jsb_a2ef07f1-1d19-403b-a68c-0092e8030adb/ssh-known-hosts-edpm-deployment/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.701191 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b4lh7_f6dc6f9d-c3d7-45da-98ca-e00538c9680e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:09 crc kubenswrapper[4877]: I1211 19:09:09.862434 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79885c8c-7qj69_6776094e-cd5a-4539-9b5c-368030c70458/proxy-server/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.026964 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79885c8c-7qj69_6776094e-cd5a-4539-9b5c-368030c70458/proxy-httpd/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.058237 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vdnmw_daa2f87b-1f8a-423e-88f1-17150ab15ba0/swift-ring-rebalance/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 
19:09:10.202066 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-auditor/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.260497 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-reaper/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.349781 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-replicator/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.408899 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/account-server/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.419608 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-auditor/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.487869 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-replicator/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.618502 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-server/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.632925 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-auditor/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.673769 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/container-updater/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.763529 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-expirer/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.909909 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-updater/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.956278 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-replicator/0.log" Dec 11 19:09:10 crc kubenswrapper[4877]: I1211 19:09:10.973850 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/object-server/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 19:09:11.007670 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/rsync/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 19:09:11.125998 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c6eb39a0-5f8c-44d1-b27e-c946c850a539/swift-recon-cron/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 19:09:11.239151 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-cc7pq_352625c8-a275-44c6-9758-962aa05194b1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 19:09:11.400512 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_60eefae6-1396-4f0a-b52a-7827dca29fb3/tempest-tests-tempest-tests-runner/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 19:09:11.533563 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4dc36034-313e-409f-86f4-e69f9ae0ee24/test-operator-logs-container/0.log" Dec 11 19:09:11 crc kubenswrapper[4877]: I1211 
19:09:11.637740 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h2bv5_18c92635-2d69-45d4-b25a-8a67a228e11c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 19:09:20 crc kubenswrapper[4877]: I1211 19:09:20.839337 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ee9c80e6-7afc-495d-85c0-9c3b64d26df5/memcached/0.log" Dec 11 19:09:31 crc kubenswrapper[4877]: I1211 19:09:31.137517 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused" Dec 11 19:09:31 crc kubenswrapper[4877]: I1211 19:09:31.606735 4877 generic.go:334] "Generic (PLEG): container finished" podID="53a860ae-4169-4f47-8ba7-032c96b4be3a" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" exitCode=1 Dec 11 19:09:31 crc kubenswrapper[4877]: I1211 19:09:31.606784 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerDied","Data":"bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"} Dec 11 19:09:31 crc kubenswrapper[4877]: I1211 19:09:31.606829 4877 scope.go:117] "RemoveContainer" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:09:31 crc kubenswrapper[4877]: I1211 19:09:31.607926 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:09:31 crc kubenswrapper[4877]: E1211 19:09:31.608631 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.545047 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.701164 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.709608 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.765916 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.877646 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/pull/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.894774 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/util/0.log" Dec 11 19:09:39 crc kubenswrapper[4877]: I1211 19:09:39.905579 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_196b5055f489744998e7b54c5159ab38024aa6952b4f631e0db2214a0anwg52_d106ca54-ea59-4bc1-9b2e-309981ea2055/extract/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.052716 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ddr9m_c3dd6849-836b-462c-abbc-d97418287658/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.115358 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-ddr9m_c3dd6849-836b-462c-abbc-d97418287658/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.208511 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-h429c_444a3f0d-8828-4958-9d25-61f4251d74c4/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.297261 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-h429c_444a3f0d-8828-4958-9d25-61f4251d74c4/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.334435 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hsvv2_105c0e29-3d26-49ee-83f1-9ac47ec17cfd/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.421738 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-hsvv2_105c0e29-3d26-49ee-83f1-9ac47ec17cfd/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.489945 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-nf9dm_3670e0c0-f188-4f22-8097-52f0a00b3a47/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 
19:09:40.578608 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-nf9dm_3670e0c0-f188-4f22-8097-52f0a00b3a47/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.663676 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-c4nfg_fd25a8fc-0f52-4795-8a65-debdfdf452b3/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.671397 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-c4nfg_fd25a8fc-0f52-4795-8a65-debdfdf452b3/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.839939 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xtk9z_18dda364-66d0-47d3-8c03-4b0ecb73a634/manager/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.846171 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-xtk9z_18dda364-66d0-47d3-8c03-4b0ecb73a634/kube-rbac-proxy/0.log" Dec 11 19:09:40 crc kubenswrapper[4877]: I1211 19:09:40.997688 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6797f5b887-q9vgk_53a860ae-4169-4f47-8ba7-032c96b4be3a/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: E1211 19:09:41.011902 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68\": container with ID starting with e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68 not found: ID does not exist" containerID="e16485d9503271f8cac41dff38daf05cdeb6ca5cfd76e9c87cb58e393930cb68" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 
19:09:41.029739 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6797f5b887-q9vgk_53a860ae-4169-4f47-8ba7-032c96b4be3a/manager/10.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.137768 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.137824 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.138508 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:09:41 crc kubenswrapper[4877]: E1211 19:09:41.138822 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.180361 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-mcjpd_2fb4fbf5-e490-43ae-b7c5-8a2e481f7209/manager/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.186946 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-mcjpd_2fb4fbf5-e490-43ae-b7c5-8a2e481f7209/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.391961 4877 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-tlsrq_6dcd317e-41f9-45e8-bd14-77d9f4ae25dd/manager/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.403614 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-tlsrq_6dcd317e-41f9-45e8-bd14-77d9f4ae25dd/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.444974 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-nprs8_e0378195-6809-4f5c-b9f3-a37177789ee5/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.534906 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-nprs8_e0378195-6809-4f5c-b9f3-a37177789ee5/manager/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.602053 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-q25n5_f7fa49af-7b01-4972-aec4-5b2b42dee85f/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.658785 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-q25n5_f7fa49af-7b01-4972-aec4-5b2b42dee85f/manager/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.777130 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-qn5w6_243bfab6-eced-4740-87ce-ab61441881f5/kube-rbac-proxy/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 19:09:41.837146 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-qn5w6_243bfab6-eced-4740-87ce-ab61441881f5/manager/0.log" Dec 11 19:09:41 crc kubenswrapper[4877]: I1211 
19:09:41.939288 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-8vf65_099a6c32-cea0-4cea-b763-f60ba3e867e7/kube-rbac-proxy/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.052862 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-8vf65_099a6c32-cea0-4cea-b763-f60ba3e867e7/manager/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.122933 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-p97xh_efcf4499-fc58-4b4c-b047-c397b6154e38/kube-rbac-proxy/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.151877 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-p97xh_efcf4499-fc58-4b4c-b047-c397b6154e38/manager/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.282098 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f4zhrq_e525cb88-4985-4374-a7f8-185c016e4a14/kube-rbac-proxy/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.314123 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f4zhrq_e525cb88-4985-4374-a7f8-185c016e4a14/manager/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.627531 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nmcl7_05174a0d-198b-4dbc-847c-164453075d91/registry-server/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.831736 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7pc7q_1ebbc540-13e3-4fee-a9b7-10bb95da50b9/kube-rbac-proxy/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.868894 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-7pc7q_1ebbc540-13e3-4fee-a9b7-10bb95da50b9/manager/0.log"
Dec 11 19:09:42 crc kubenswrapper[4877]: I1211 19:09:42.886721 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8c9b75f7c-ccsvg_fcdb3a3a-e3a4-42ce-a44a-eb34c2eda169/operator/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.000648 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wldgs_8675dcf8-097e-4927-aa50-827f3034af41/kube-rbac-proxy/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.074032 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-wldgs_8675dcf8-097e-4927-aa50-827f3034af41/manager/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.186138 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7cg2z_15a74efd-e36a-4946-a9e5-2453c98355aa/operator/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.284439 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-dmlvl_ca3c6f4e-3491-4109-bf60-f4efbad58bc1/kube-rbac-proxy/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.356988 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-dmlvl_ca3c6f4e-3491-4109-bf60-f4efbad58bc1/manager/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.447864 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-d77ht_a571f5fa-fa44-48fd-b675-f0b42607ac7d/kube-rbac-proxy/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.562558 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-d77ht_a571f5fa-fa44-48fd-b675-f0b42607ac7d/manager/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.593973 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-545595b497-h5vf4_c21c4469-97a3-47c7-bced-d7d18aa1008a/manager/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.712263 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-9sw8b_105e535b-6aee-4187-9008-65a41e6e3572/kube-rbac-proxy/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.769809 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-9sw8b_105e535b-6aee-4187-9008-65a41e6e3572/manager/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.866020 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-zm6ts_35acba78-7a75-40d7-b5fb-43d595c3bc1f/kube-rbac-proxy/0.log"
Dec 11 19:09:43 crc kubenswrapper[4877]: I1211 19:09:43.935830 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-zm6ts_35acba78-7a75-40d7-b5fb-43d595c3bc1f/manager/0.log"
Dec 11 19:09:55 crc kubenswrapper[4877]: I1211 19:09:55.216463 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:09:55 crc kubenswrapper[4877]: E1211 19:09:55.217243 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:10:05 crc kubenswrapper[4877]: I1211 19:10:05.336195 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kntl9_98cc800c-b8b6-49f9-94c0-42bb0c22eb76/control-plane-machine-set-operator/0.log"
Dec 11 19:10:05 crc kubenswrapper[4877]: I1211 19:10:05.444971 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c92n_1e05b4cd-2477-4d69-804f-c8dc59d6da3d/kube-rbac-proxy/0.log"
Dec 11 19:10:05 crc kubenswrapper[4877]: I1211 19:10:05.466869 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c92n_1e05b4cd-2477-4d69-804f-c8dc59d6da3d/machine-api-operator/0.log"
Dec 11 19:10:10 crc kubenswrapper[4877]: I1211 19:10:10.215835 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:10:10 crc kubenswrapper[4877]: E1211 19:10:10.216852 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:10:19 crc kubenswrapper[4877]: I1211 19:10:19.967726 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-fsjfl_1212481e-8248-4bfe-903d-b6b08b87ead6/cert-manager-controller/0.log"
Dec 11 19:10:20 crc kubenswrapper[4877]: I1211 19:10:20.076686 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qzspd_a89aaf35-60f1-481e-9f2f-4bcf0f70cec7/cert-manager-webhook/0.log"
Dec 11 19:10:20 crc kubenswrapper[4877]: I1211 19:10:20.107741 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s4sld_57038dbe-549d-4b29-b24a-4d32261c3a50/cert-manager-cainjector/0.log"
Dec 11 19:10:21 crc kubenswrapper[4877]: I1211 19:10:21.215787 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:10:21 crc kubenswrapper[4877]: E1211 19:10:21.216643 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:10:33 crc kubenswrapper[4877]: I1211 19:10:33.216185 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:10:33 crc kubenswrapper[4877]: E1211 19:10:33.217262 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:10:34 crc kubenswrapper[4877]: I1211 19:10:34.975446 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-89bls_1eb218c2-7bcc-411b-90e3-ab813f9739a4/nmstate-console-plugin/0.log"
Dec 11 19:10:35 crc kubenswrapper[4877]: I1211 19:10:35.455424 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-229td_a304c336-7461-4570-a515-4ae4c7d2cebd/nmstate-handler/0.log"
Dec 11 19:10:35 crc kubenswrapper[4877]: I1211 19:10:35.487733 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-g7dnb_3caa2a2a-a894-44ae-8b4d-8bca5b08d582/nmstate-metrics/0.log"
Dec 11 19:10:35 crc kubenswrapper[4877]: I1211 19:10:35.493025 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-g7dnb_3caa2a2a-a894-44ae-8b4d-8bca5b08d582/kube-rbac-proxy/0.log"
Dec 11 19:10:35 crc kubenswrapper[4877]: I1211 19:10:35.659682 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-kkxhc_e132a5e4-7ab3-4161-bc40-3d20fc57dab7/nmstate-operator/0.log"
Dec 11 19:10:35 crc kubenswrapper[4877]: I1211 19:10:35.687175 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-zxfhk_51a24aa4-4100-4dee-9b55-72a9c14f4859/nmstate-webhook/0.log"
Dec 11 19:10:46 crc kubenswrapper[4877]: I1211 19:10:46.638243 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 19:10:46 crc kubenswrapper[4877]: I1211 19:10:46.638942 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 19:10:48 crc kubenswrapper[4877]: I1211 19:10:48.216277 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:10:48 crc kubenswrapper[4877]: E1211 19:10:48.217090 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:10:51 crc kubenswrapper[4877]: I1211 19:10:51.787636 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-jtbmt_786d79d5-52c0-410a-b4e4-b3df71e617ba/kube-rbac-proxy/0.log"
Dec 11 19:10:51 crc kubenswrapper[4877]: I1211 19:10:51.874171 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-jtbmt_786d79d5-52c0-410a-b4e4-b3df71e617ba/controller/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.000833 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.185294 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.201833 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.203954 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.209773 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.371638 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.393821 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.397187 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.399606 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.599729 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-metrics/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.599839 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-frr-files/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.599850 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/cp-reloader/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.626064 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/controller/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.771905 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/kube-rbac-proxy/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.803037 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/frr-metrics/0.log"
Dec 11 19:10:52 crc kubenswrapper[4877]: I1211 19:10:52.832235 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/kube-rbac-proxy-frr/0.log"
Dec 11 19:10:53 crc kubenswrapper[4877]: I1211 19:10:53.035462 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/reloader/0.log"
Dec 11 19:10:53 crc kubenswrapper[4877]: I1211 19:10:53.058676 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-v5sxh_d3dad0c6-8977-4cff-9866-e95d84ccc658/frr-k8s-webhook-server/0.log"
Dec 11 19:10:53 crc kubenswrapper[4877]: I1211 19:10:53.307150 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-554f49ddd5-7c57c_ab8873f5-e97e-483c-a6f4-dad1a15fb382/manager/0.log"
Dec 11 19:10:53 crc kubenswrapper[4877]: I1211 19:10:53.499943 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9hdjf_661affbe-08b4-406d-b4e4-78cbefa4de67/kube-rbac-proxy/0.log"
Dec 11 19:10:53 crc kubenswrapper[4877]: I1211 19:10:53.557723 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-bf4cbf554-xwtvp_36cfd319-6f46-4547-bc92-6d8f108f556b/webhook-server/0.log"
Dec 11 19:10:54 crc kubenswrapper[4877]: I1211 19:10:54.142681 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9hdjf_661affbe-08b4-406d-b4e4-78cbefa4de67/speaker/0.log"
Dec 11 19:10:54 crc kubenswrapper[4877]: I1211 19:10:54.284778 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2sg4m_e75b60e5-f320-42d0-8ebc-4aa90962ced4/frr/0.log"
Dec 11 19:10:59 crc kubenswrapper[4877]: I1211 19:10:59.228351 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:10:59 crc kubenswrapper[4877]: E1211 19:10:59.229489 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.434570 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.606193 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.619937 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.621735 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.779247 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/util/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.780871 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/pull/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.800700 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4xrqdx_cd0b28b4-1e6e-45dc-8ed7-74c641bccaf9/extract/0.log"
Dec 11 19:11:09 crc kubenswrapper[4877]: I1211 19:11:09.932465 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.076853 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.097159 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.098723 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.561748 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/pull/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.573901 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/util/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.598201 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8mfjx5_a3671635-4e9b-4c74-85a5-98480f49249a/extract/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.739164 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.924057 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.959958 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log"
Dec 11 19:11:10 crc kubenswrapper[4877]: I1211 19:11:10.977741 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.100872 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-utilities/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.104330 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/extract-content/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.214876 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:11:11 crc kubenswrapper[4877]: E1211 19:11:11.215120 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.330042 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.479911 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-df92c_00bd5c61-a49a-4020-8e9a-fa130c65c7e2/registry-server/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.559718 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.593155 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.600600 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.790182 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-utilities/0.log"
Dec 11 19:11:11 crc kubenswrapper[4877]: I1211 19:11:11.799141 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.005889 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/1.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.023236 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qvdz_1eaf037c-b9a9-4c1b-b108-0ffcad610322/marketplace-operator/2.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.189922 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.413899 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.448264 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.485642 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5924z_d0250f8a-6910-4bd8-a583-f772807319f1/registry-server/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.491531 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.637657 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.668897 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.675971 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/extract-utilities/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.805666 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbv6k_c1003f02-48c3-4729-8720-0e23ffb4b8dd/registry-server/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.868390 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.871136 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log"
Dec 11 19:11:12 crc kubenswrapper[4877]: I1211 19:11:12.872675 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log"
Dec 11 19:11:13 crc kubenswrapper[4877]: I1211 19:11:13.055607 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-utilities/0.log"
Dec 11 19:11:13 crc kubenswrapper[4877]: I1211 19:11:13.063082 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/extract-content/0.log"
Dec 11 19:11:13 crc kubenswrapper[4877]: I1211 19:11:13.241603 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2k25c_fbba94b5-69fd-4956-b1f7-8e397630c287/registry-server/0.log"
Dec 11 19:11:16 crc kubenswrapper[4877]: I1211 19:11:16.638520 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 19:11:16 crc kubenswrapper[4877]: I1211 19:11:16.639475 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 19:11:22 crc kubenswrapper[4877]: I1211 19:11:22.215899 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de"
Dec 11 19:11:22 crc kubenswrapper[4877]: E1211 19:11:22.216581 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:25.999631 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lc452"]
Dec 11 19:11:26 crc kubenswrapper[4877]: E1211 19:11:26.003334 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47183cf9-abf7-48ec-bc3e-909c4b2b9436" containerName="container-00"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.003358 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="47183cf9-abf7-48ec-bc3e-909c4b2b9436" containerName="container-00"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.003601 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="47183cf9-abf7-48ec-bc3e-909c4b2b9436" containerName="container-00"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.005053 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.017402 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lc452"]
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.176612 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrlq\" (UniqueName: \"kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.177059 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.177228 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.279072 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrlq\" (UniqueName: \"kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.279494 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.279596 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.279975 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.280068 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.299132 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrlq\" (UniqueName: \"kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq\") pod \"redhat-operators-lc452\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.325693 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lc452"
Dec 11 19:11:26 crc kubenswrapper[4877]: I1211 19:11:26.788619 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lc452"]
Dec 11 19:11:27 crc kubenswrapper[4877]: I1211 19:11:27.495656 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerStarted","Data":"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e"}
Dec 11 19:11:27 crc kubenswrapper[4877]: I1211 19:11:27.496723 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerStarted","Data":"727f82ca07198eee2666aa38cead7d54da27db2a05bacb2d99b661e9d208a776"}
Dec 11 19:11:27 crc kubenswrapper[4877]: I1211 19:11:27.497570 4877 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec
11 19:11:28 crc kubenswrapper[4877]: I1211 19:11:28.506146 4877 generic.go:334] "Generic (PLEG): container finished" podID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerID="3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e" exitCode=0 Dec 11 19:11:28 crc kubenswrapper[4877]: I1211 19:11:28.506309 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerDied","Data":"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e"} Dec 11 19:11:28 crc kubenswrapper[4877]: I1211 19:11:28.506555 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerStarted","Data":"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b"} Dec 11 19:11:31 crc kubenswrapper[4877]: I1211 19:11:31.535601 4877 generic.go:334] "Generic (PLEG): container finished" podID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerID="700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b" exitCode=0 Dec 11 19:11:31 crc kubenswrapper[4877]: I1211 19:11:31.535674 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerDied","Data":"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b"} Dec 11 19:11:33 crc kubenswrapper[4877]: I1211 19:11:33.590472 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerStarted","Data":"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680"} Dec 11 19:11:36 crc kubenswrapper[4877]: I1211 19:11:36.215955 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:11:36 crc 
kubenswrapper[4877]: E1211 19:11:36.216933 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:11:36 crc kubenswrapper[4877]: I1211 19:11:36.326702 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:36 crc kubenswrapper[4877]: I1211 19:11:36.326749 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:37 crc kubenswrapper[4877]: I1211 19:11:37.397914 4877 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lc452" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="registry-server" probeResult="failure" output=< Dec 11 19:11:37 crc kubenswrapper[4877]: timeout: failed to connect service ":50051" within 1s Dec 11 19:11:37 crc kubenswrapper[4877]: > Dec 11 19:11:42 crc kubenswrapper[4877]: E1211 19:11:42.947642 4877 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.103:59592->38.102.83.103:45159: write tcp 38.102.83.103:59592->38.102.83.103:45159: write: broken pipe Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.372903 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.389559 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lc452" podStartSLOduration=16.347813967 podStartE2EDuration="21.389545449s" podCreationTimestamp="2025-12-11 19:11:25 
+0000 UTC" firstStartedPulling="2025-12-11 19:11:27.497327501 +0000 UTC m=+4248.523571545" lastFinishedPulling="2025-12-11 19:11:32.539058983 +0000 UTC m=+4253.565303027" observedRunningTime="2025-12-11 19:11:33.613578109 +0000 UTC m=+4254.639822163" watchObservedRunningTime="2025-12-11 19:11:46.389545449 +0000 UTC m=+4267.415789493" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.429717 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.607985 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lc452"] Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.637581 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.637649 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.637702 4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.638496 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 19:11:46 crc kubenswrapper[4877]: I1211 19:11:46.638585 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4" gracePeriod=600 Dec 11 19:11:47 crc kubenswrapper[4877]: I1211 19:11:47.714829 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4" exitCode=0 Dec 11 19:11:47 crc kubenswrapper[4877]: I1211 19:11:47.714903 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4"} Dec 11 19:11:47 crc kubenswrapper[4877]: I1211 19:11:47.715312 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerStarted","Data":"38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0"} Dec 11 19:11:47 crc kubenswrapper[4877]: I1211 19:11:47.715355 4877 scope.go:117] "RemoveContainer" containerID="0ef93a25b99db259332b3b42cddb370e76153df4a8a7ce58dbd0bc066d87ab41" Dec 11 19:11:47 crc kubenswrapper[4877]: I1211 19:11:47.717063 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lc452" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="registry-server" containerID="cri-o://a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680" gracePeriod=2 Dec 11 19:11:48 crc 
kubenswrapper[4877]: I1211 19:11:48.369753 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.419882 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrlq\" (UniqueName: \"kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq\") pod \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.420077 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities\") pod \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.420175 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content\") pod \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\" (UID: \"96078cd8-6689-413e-ab9c-1ecefd96e3d5\") " Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.420842 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities" (OuterVolumeSpecName: "utilities") pod "96078cd8-6689-413e-ab9c-1ecefd96e3d5" (UID: "96078cd8-6689-413e-ab9c-1ecefd96e3d5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.426265 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq" (OuterVolumeSpecName: "kube-api-access-rbrlq") pod "96078cd8-6689-413e-ab9c-1ecefd96e3d5" (UID: "96078cd8-6689-413e-ab9c-1ecefd96e3d5"). InnerVolumeSpecName "kube-api-access-rbrlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.526568 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrlq\" (UniqueName: \"kubernetes.io/projected/96078cd8-6689-413e-ab9c-1ecefd96e3d5-kube-api-access-rbrlq\") on node \"crc\" DevicePath \"\"" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.526600 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.574914 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96078cd8-6689-413e-ab9c-1ecefd96e3d5" (UID: "96078cd8-6689-413e-ab9c-1ecefd96e3d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.628216 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96078cd8-6689-413e-ab9c-1ecefd96e3d5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.731539 4877 generic.go:334] "Generic (PLEG): container finished" podID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerID="a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680" exitCode=0 Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.731596 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerDied","Data":"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680"} Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.731633 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lc452" event={"ID":"96078cd8-6689-413e-ab9c-1ecefd96e3d5","Type":"ContainerDied","Data":"727f82ca07198eee2666aa38cead7d54da27db2a05bacb2d99b661e9d208a776"} Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.731660 4877 scope.go:117] "RemoveContainer" containerID="a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.731872 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lc452" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.752670 4877 scope.go:117] "RemoveContainer" containerID="700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.777809 4877 scope.go:117] "RemoveContainer" containerID="3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.784949 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lc452"] Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.799070 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lc452"] Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.810857 4877 scope.go:117] "RemoveContainer" containerID="a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680" Dec 11 19:11:48 crc kubenswrapper[4877]: E1211 19:11:48.811352 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680\": container with ID starting with a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680 not found: ID does not exist" containerID="a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.811426 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680"} err="failed to get container status \"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680\": rpc error: code = NotFound desc = could not find container \"a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680\": container with ID starting with a7719d1075581945dc81db47b4f5a9fd62faddff131824c5ab10ed1c508f3680 not found: ID does 
not exist" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.811458 4877 scope.go:117] "RemoveContainer" containerID="700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b" Dec 11 19:11:48 crc kubenswrapper[4877]: E1211 19:11:48.811869 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b\": container with ID starting with 700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b not found: ID does not exist" containerID="700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.811916 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b"} err="failed to get container status \"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b\": rpc error: code = NotFound desc = could not find container \"700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b\": container with ID starting with 700c6aff710400b49fb281d5d6e2b5fd521a7c931010d586bb40f64fa838de0b not found: ID does not exist" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.811943 4877 scope.go:117] "RemoveContainer" containerID="3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e" Dec 11 19:11:48 crc kubenswrapper[4877]: E1211 19:11:48.812195 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e\": container with ID starting with 3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e not found: ID does not exist" containerID="3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e" Dec 11 19:11:48 crc kubenswrapper[4877]: I1211 19:11:48.812224 4877 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e"} err="failed to get container status \"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e\": rpc error: code = NotFound desc = could not find container \"3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e\": container with ID starting with 3a45e106901217ff766ad7229377b338f6fa44a8a9fac4c3f6718f104be7e10e not found: ID does not exist" Dec 11 19:11:49 crc kubenswrapper[4877]: I1211 19:11:49.235559 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" path="/var/lib/kubelet/pods/96078cd8-6689-413e-ab9c-1ecefd96e3d5/volumes" Dec 11 19:11:51 crc kubenswrapper[4877]: I1211 19:11:51.215747 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:11:51 crc kubenswrapper[4877]: E1211 19:11:51.216567 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:03 crc kubenswrapper[4877]: I1211 19:12:03.216124 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:12:03 crc kubenswrapper[4877]: E1211 19:12:03.217024 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:15 crc kubenswrapper[4877]: I1211 19:12:15.218450 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:12:15 crc kubenswrapper[4877]: E1211 19:12:15.220404 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.944465 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:20 crc kubenswrapper[4877]: E1211 19:12:20.945486 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="registry-server" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.945502 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="registry-server" Dec 11 19:12:20 crc kubenswrapper[4877]: E1211 19:12:20.945515 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="extract-utilities" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.945523 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="extract-utilities" Dec 11 19:12:20 crc kubenswrapper[4877]: E1211 19:12:20.945550 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="extract-content" Dec 11 19:12:20 crc kubenswrapper[4877]: 
I1211 19:12:20.945559 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="extract-content" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.945773 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="96078cd8-6689-413e-ab9c-1ecefd96e3d5" containerName="registry-server" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.947479 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:20 crc kubenswrapper[4877]: I1211 19:12:20.963925 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.106862 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbdz2\" (UniqueName: \"kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.106957 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.106997 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc 
kubenswrapper[4877]: I1211 19:12:21.208897 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdz2\" (UniqueName: \"kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.209017 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.209071 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.209761 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.210398 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.246748 4877 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdz2\" (UniqueName: \"kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2\") pod \"redhat-marketplace-6dfpq\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.330766 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:21 crc kubenswrapper[4877]: I1211 19:12:21.808850 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:22 crc kubenswrapper[4877]: I1211 19:12:22.115144 4877 generic.go:334] "Generic (PLEG): container finished" podID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerID="6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5" exitCode=0 Dec 11 19:12:22 crc kubenswrapper[4877]: I1211 19:12:22.115207 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerDied","Data":"6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5"} Dec 11 19:12:22 crc kubenswrapper[4877]: I1211 19:12:22.115233 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerStarted","Data":"dedfef6127f642000418fc1fb32963c51dfb63775b0a4d1bd9289aa13e04559a"} Dec 11 19:12:24 crc kubenswrapper[4877]: I1211 19:12:24.142232 4877 generic.go:334] "Generic (PLEG): container finished" podID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerID="0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799" exitCode=0 Dec 11 19:12:24 crc kubenswrapper[4877]: I1211 19:12:24.142286 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerDied","Data":"0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799"} Dec 11 19:12:26 crc kubenswrapper[4877]: I1211 19:12:26.185111 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerStarted","Data":"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733"} Dec 11 19:12:26 crc kubenswrapper[4877]: I1211 19:12:26.215190 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dfpq" podStartSLOduration=3.606820619 podStartE2EDuration="6.215174951s" podCreationTimestamp="2025-12-11 19:12:20 +0000 UTC" firstStartedPulling="2025-12-11 19:12:22.117769826 +0000 UTC m=+4303.144013890" lastFinishedPulling="2025-12-11 19:12:24.726124168 +0000 UTC m=+4305.752368222" observedRunningTime="2025-12-11 19:12:26.210454805 +0000 UTC m=+4307.236698859" watchObservedRunningTime="2025-12-11 19:12:26.215174951 +0000 UTC m=+4307.241418985" Dec 11 19:12:28 crc kubenswrapper[4877]: I1211 19:12:28.215706 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:12:28 crc kubenswrapper[4877]: E1211 19:12:28.216497 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:31 crc kubenswrapper[4877]: I1211 19:12:31.331333 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:31 crc kubenswrapper[4877]: I1211 19:12:31.332185 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:31 crc kubenswrapper[4877]: I1211 19:12:31.404363 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:32 crc kubenswrapper[4877]: I1211 19:12:32.344010 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:32 crc kubenswrapper[4877]: I1211 19:12:32.419219 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:34 crc kubenswrapper[4877]: I1211 19:12:34.297127 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dfpq" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="registry-server" containerID="cri-o://26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733" gracePeriod=2 Dec 11 19:12:34 crc kubenswrapper[4877]: E1211 19:12:34.524836 4877 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06fcd5a6_2f50_4b34_8c61_66a8f318b8d5.slice/crio-26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06fcd5a6_2f50_4b34_8c61_66a8f318b8d5.slice/crio-conmon-26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733.scope\": RecentStats: unable to find data in memory cache]" Dec 11 19:12:34 crc kubenswrapper[4877]: I1211 19:12:34.884162 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.028697 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbdz2\" (UniqueName: \"kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2\") pod \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.028914 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content\") pod \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.029026 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities\") pod \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\" (UID: \"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5\") " Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.030206 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities" (OuterVolumeSpecName: "utilities") pod "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" (UID: "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.037333 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2" (OuterVolumeSpecName: "kube-api-access-mbdz2") pod "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" (UID: "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5"). InnerVolumeSpecName "kube-api-access-mbdz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.053727 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" (UID: "06fcd5a6-2f50-4b34-8c61-66a8f318b8d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.131328 4877 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.131356 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbdz2\" (UniqueName: \"kubernetes.io/projected/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-kube-api-access-mbdz2\") on node \"crc\" DevicePath \"\"" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.131366 4877 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.308571 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dfpq" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.308643 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerDied","Data":"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733"} Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.308703 4877 scope.go:117] "RemoveContainer" containerID="26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.309868 4877 generic.go:334] "Generic (PLEG): container finished" podID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerID="26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733" exitCode=0 Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.310356 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dfpq" event={"ID":"06fcd5a6-2f50-4b34-8c61-66a8f318b8d5","Type":"ContainerDied","Data":"dedfef6127f642000418fc1fb32963c51dfb63775b0a4d1bd9289aa13e04559a"} Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.348527 4877 scope.go:117] "RemoveContainer" containerID="0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.373994 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.381585 4877 scope.go:117] "RemoveContainer" containerID="6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.390845 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dfpq"] Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.454057 4877 scope.go:117] "RemoveContainer" 
containerID="26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733" Dec 11 19:12:35 crc kubenswrapper[4877]: E1211 19:12:35.455001 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733\": container with ID starting with 26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733 not found: ID does not exist" containerID="26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.455071 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733"} err="failed to get container status \"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733\": rpc error: code = NotFound desc = could not find container \"26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733\": container with ID starting with 26d2a9bd0bfbca92cd9e4e919ca4260171a3f74c3907e1233c55070d0f0d5733 not found: ID does not exist" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.455117 4877 scope.go:117] "RemoveContainer" containerID="0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799" Dec 11 19:12:35 crc kubenswrapper[4877]: E1211 19:12:35.455816 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799\": container with ID starting with 0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799 not found: ID does not exist" containerID="0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.455860 4877 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799"} err="failed to get container status \"0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799\": rpc error: code = NotFound desc = could not find container \"0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799\": container with ID starting with 0d8900ea22c6eca37307c1a255ac8053440cfd7a5ec564239f751e9e06b87799 not found: ID does not exist" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.455894 4877 scope.go:117] "RemoveContainer" containerID="6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5" Dec 11 19:12:35 crc kubenswrapper[4877]: E1211 19:12:35.456398 4877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5\": container with ID starting with 6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5 not found: ID does not exist" containerID="6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5" Dec 11 19:12:35 crc kubenswrapper[4877]: I1211 19:12:35.456446 4877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5"} err="failed to get container status \"6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5\": rpc error: code = NotFound desc = could not find container \"6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5\": container with ID starting with 6ac83f16e8b40edfce3c31be5a40c9de46dd312cd322664df72b10caa22a86d5 not found: ID does not exist" Dec 11 19:12:37 crc kubenswrapper[4877]: I1211 19:12:37.239492 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" path="/var/lib/kubelet/pods/06fcd5a6-2f50-4b34-8c61-66a8f318b8d5/volumes" Dec 11 19:12:39 crc kubenswrapper[4877]: I1211 
19:12:39.236828 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:12:39 crc kubenswrapper[4877]: E1211 19:12:39.237719 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:51 crc kubenswrapper[4877]: I1211 19:12:51.216207 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:12:51 crc kubenswrapper[4877]: E1211 19:12:51.217305 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:12:53 crc kubenswrapper[4877]: I1211 19:12:53.533326 4877 generic.go:334] "Generic (PLEG): container finished" podID="2eddb64a-1239-4517-816d-1090f5a55755" containerID="738a844403ccb011095a19bcbafa4e60dc410812f4c0049381dd32283ac0a076" exitCode=0 Dec 11 19:12:53 crc kubenswrapper[4877]: I1211 19:12:53.533421 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" event={"ID":"2eddb64a-1239-4517-816d-1090f5a55755","Type":"ContainerDied","Data":"738a844403ccb011095a19bcbafa4e60dc410812f4c0049381dd32283ac0a076"} Dec 11 19:12:53 crc kubenswrapper[4877]: I1211 19:12:53.534240 4877 scope.go:117] "RemoveContainer" 
containerID="738a844403ccb011095a19bcbafa4e60dc410812f4c0049381dd32283ac0a076" Dec 11 19:12:53 crc kubenswrapper[4877]: I1211 19:12:53.725224 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dkpg7_must-gather-bhfm9_2eddb64a-1239-4517-816d-1090f5a55755/gather/0.log" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.215670 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:13:03 crc kubenswrapper[4877]: E1211 19:13:03.216882 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.251519 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dkpg7/must-gather-bhfm9"] Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.251799 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="copy" containerID="cri-o://423dc9db7ee3868a39f7fcaee5eb1f474edac46e9adf0c3c802357a141b565dd" gracePeriod=2 Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.261243 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dkpg7/must-gather-bhfm9"] Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.665804 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dkpg7_must-gather-bhfm9_2eddb64a-1239-4517-816d-1090f5a55755/copy/0.log" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.666833 4877 generic.go:334] "Generic (PLEG): container 
finished" podID="2eddb64a-1239-4517-816d-1090f5a55755" containerID="423dc9db7ee3868a39f7fcaee5eb1f474edac46e9adf0c3c802357a141b565dd" exitCode=143 Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.666881 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="419036c2a869d6459616d2b62c2e3d50c8bb2d7984cc136d454788b1c443d9ac" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.746228 4877 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dkpg7_must-gather-bhfm9_2eddb64a-1239-4517-816d-1090f5a55755/copy/0.log" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.746635 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.916585 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9b6q\" (UniqueName: \"kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q\") pod \"2eddb64a-1239-4517-816d-1090f5a55755\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.916661 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output\") pod \"2eddb64a-1239-4517-816d-1090f5a55755\" (UID: \"2eddb64a-1239-4517-816d-1090f5a55755\") " Dec 11 19:13:03 crc kubenswrapper[4877]: I1211 19:13:03.931892 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q" (OuterVolumeSpecName: "kube-api-access-t9b6q") pod "2eddb64a-1239-4517-816d-1090f5a55755" (UID: "2eddb64a-1239-4517-816d-1090f5a55755"). InnerVolumeSpecName "kube-api-access-t9b6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:13:04 crc kubenswrapper[4877]: I1211 19:13:04.019706 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9b6q\" (UniqueName: \"kubernetes.io/projected/2eddb64a-1239-4517-816d-1090f5a55755-kube-api-access-t9b6q\") on node \"crc\" DevicePath \"\"" Dec 11 19:13:04 crc kubenswrapper[4877]: I1211 19:13:04.053679 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2eddb64a-1239-4517-816d-1090f5a55755" (UID: "2eddb64a-1239-4517-816d-1090f5a55755"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 19:13:04 crc kubenswrapper[4877]: I1211 19:13:04.121135 4877 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2eddb64a-1239-4517-816d-1090f5a55755-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 19:13:04 crc kubenswrapper[4877]: I1211 19:13:04.677235 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dkpg7/must-gather-bhfm9" Dec 11 19:13:05 crc kubenswrapper[4877]: I1211 19:13:05.227889 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eddb64a-1239-4517-816d-1090f5a55755" path="/var/lib/kubelet/pods/2eddb64a-1239-4517-816d-1090f5a55755/volumes" Dec 11 19:13:17 crc kubenswrapper[4877]: I1211 19:13:17.215456 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:13:17 crc kubenswrapper[4877]: E1211 19:13:17.216212 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:13:31 crc kubenswrapper[4877]: I1211 19:13:31.215907 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:13:31 crc kubenswrapper[4877]: E1211 19:13:31.217000 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:13:46 crc kubenswrapper[4877]: I1211 19:13:46.215110 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:13:46 crc kubenswrapper[4877]: E1211 19:13:46.216329 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:13:46 crc kubenswrapper[4877]: I1211 19:13:46.638467 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:13:46 crc kubenswrapper[4877]: I1211 19:13:46.638580 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:13:47 crc kubenswrapper[4877]: I1211 19:13:47.770780 4877 scope.go:117] "RemoveContainer" containerID="738a844403ccb011095a19bcbafa4e60dc410812f4c0049381dd32283ac0a076" Dec 11 19:13:47 crc kubenswrapper[4877]: I1211 19:13:47.851775 4877 scope.go:117] "RemoveContainer" containerID="423dc9db7ee3868a39f7fcaee5eb1f474edac46e9adf0c3c802357a141b565dd" Dec 11 19:14:00 crc kubenswrapper[4877]: I1211 19:14:00.216133 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:14:00 crc kubenswrapper[4877]: E1211 19:14:00.217240 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" 
pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:14:11 crc kubenswrapper[4877]: I1211 19:14:11.216082 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:14:11 crc kubenswrapper[4877]: E1211 19:14:11.217182 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:14:16 crc kubenswrapper[4877]: I1211 19:14:16.639033 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:14:16 crc kubenswrapper[4877]: I1211 19:14:16.639803 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:14:25 crc kubenswrapper[4877]: I1211 19:14:25.216896 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:14:25 crc kubenswrapper[4877]: E1211 19:14:25.218017 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=manager 
pod=infra-operator-controller-manager-6797f5b887-q9vgk_openstack-operators(53a860ae-4169-4f47-8ba7-032c96b4be3a)\"" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" podUID="53a860ae-4169-4f47-8ba7-032c96b4be3a" Dec 11 19:14:36 crc kubenswrapper[4877]: I1211 19:14:36.215725 4877 scope.go:117] "RemoveContainer" containerID="bd367e02053eac5fe563e6e7a7bd38f7e4cc7c35a43612535472440d7dc009de" Dec 11 19:14:36 crc kubenswrapper[4877]: I1211 19:14:36.693220 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" event={"ID":"53a860ae-4169-4f47-8ba7-032c96b4be3a","Type":"ContainerStarted","Data":"e152daf068411d69dc5d2ac0c046ea6c5c6b5c4ba0c63e8dbd9e7357272da215"} Dec 11 19:14:36 crc kubenswrapper[4877]: I1211 19:14:36.693841 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:14:41 crc kubenswrapper[4877]: I1211 19:14:41.145580 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6797f5b887-q9vgk" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.638101 4877 patch_prober.go:28] interesting pod/machine-config-daemon-sjnxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.638529 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.638591 
4877 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.639339 4877 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0"} pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.639469 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerName="machine-config-daemon" containerID="cri-o://38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" gracePeriod=600 Dec 11 19:14:46 crc kubenswrapper[4877]: E1211 19:14:46.774022 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.808948 4877 generic.go:334] "Generic (PLEG): container finished" podID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" exitCode=0 Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.809006 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" 
event={"ID":"47cbee6c-de7f-4f75-8a7b-6d4e7da6f963","Type":"ContainerDied","Data":"38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0"} Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.809058 4877 scope.go:117] "RemoveContainer" containerID="149237b968b52dd9cf7c2aa323775e7ffc4c12da161e6debfd35ffc67dcfc7b4" Dec 11 19:14:46 crc kubenswrapper[4877]: I1211 19:14:46.809777 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:14:46 crc kubenswrapper[4877]: E1211 19:14:46.810173 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:14:47 crc kubenswrapper[4877]: I1211 19:14:47.944722 4877 scope.go:117] "RemoveContainer" containerID="69f1b3b4065848b2738c7948de3a466c5cf9b3b81815714061cec919a48809a6" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.201110 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98"] Dec 11 19:15:00 crc kubenswrapper[4877]: E1211 19:15:00.202049 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="gather" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202063 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="gather" Dec 11 19:15:00 crc kubenswrapper[4877]: E1211 19:15:00.202076 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="copy" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 
19:15:00.202082 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="copy" Dec 11 19:15:00 crc kubenswrapper[4877]: E1211 19:15:00.202114 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="extract-utilities" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202122 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="extract-utilities" Dec 11 19:15:00 crc kubenswrapper[4877]: E1211 19:15:00.202133 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="extract-content" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202140 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="extract-content" Dec 11 19:15:00 crc kubenswrapper[4877]: E1211 19:15:00.202151 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="registry-server" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202158 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="registry-server" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202360 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="06fcd5a6-2f50-4b34-8c61-66a8f318b8d5" containerName="registry-server" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202392 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="copy" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.202401 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eddb64a-1239-4517-816d-1090f5a55755" containerName="gather" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.203069 4877 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.205348 4877 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.205561 4877 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.210023 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98"] Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.331877 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.332267 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2849w\" (UniqueName: \"kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.332287 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.435119 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.435247 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2849w\" (UniqueName: \"kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.435280 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.436992 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.444634 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.455452 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2849w\" (UniqueName: \"kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w\") pod \"collect-profiles-29424675-pvl98\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:00 crc kubenswrapper[4877]: I1211 19:15:00.542152 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:01 crc kubenswrapper[4877]: I1211 19:15:01.040343 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98"] Dec 11 19:15:01 crc kubenswrapper[4877]: I1211 19:15:01.217281 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:15:01 crc kubenswrapper[4877]: E1211 19:15:01.218216 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:15:01 crc kubenswrapper[4877]: I1211 19:15:01.985117 4877 generic.go:334] "Generic (PLEG): container finished" podID="bbf352d2-d21e-4342-b966-1ceb464f11fd" containerID="cdd33becf72e4951fc6ef6213408413a0a14ec41361e6afed6bff668ccbd31e5" 
exitCode=0 Dec 11 19:15:01 crc kubenswrapper[4877]: I1211 19:15:01.985223 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" event={"ID":"bbf352d2-d21e-4342-b966-1ceb464f11fd","Type":"ContainerDied","Data":"cdd33becf72e4951fc6ef6213408413a0a14ec41361e6afed6bff668ccbd31e5"} Dec 11 19:15:01 crc kubenswrapper[4877]: I1211 19:15:01.985672 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" event={"ID":"bbf352d2-d21e-4342-b966-1ceb464f11fd","Type":"ContainerStarted","Data":"403529dc1bddb30ef6d26eeb2510c6e6ee7b908b5ab9fda70cc6feb472129ff8"} Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.347935 4877 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.398295 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2849w\" (UniqueName: \"kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w\") pod \"bbf352d2-d21e-4342-b966-1ceb464f11fd\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.398426 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume\") pod \"bbf352d2-d21e-4342-b966-1ceb464f11fd\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.398490 4877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume\") pod \"bbf352d2-d21e-4342-b966-1ceb464f11fd\" (UID: \"bbf352d2-d21e-4342-b966-1ceb464f11fd\") " Dec 11 
19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.399831 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbf352d2-d21e-4342-b966-1ceb464f11fd" (UID: "bbf352d2-d21e-4342-b966-1ceb464f11fd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.405197 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bbf352d2-d21e-4342-b966-1ceb464f11fd" (UID: "bbf352d2-d21e-4342-b966-1ceb464f11fd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.405255 4877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w" (OuterVolumeSpecName: "kube-api-access-2849w") pod "bbf352d2-d21e-4342-b966-1ceb464f11fd" (UID: "bbf352d2-d21e-4342-b966-1ceb464f11fd"). InnerVolumeSpecName "kube-api-access-2849w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.500724 4877 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2849w\" (UniqueName: \"kubernetes.io/projected/bbf352d2-d21e-4342-b966-1ceb464f11fd-kube-api-access-2849w\") on node \"crc\" DevicePath \"\"" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.500764 4877 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf352d2-d21e-4342-b966-1ceb464f11fd-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 19:15:03 crc kubenswrapper[4877]: I1211 19:15:03.500774 4877 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf352d2-d21e-4342-b966-1ceb464f11fd-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 19:15:04 crc kubenswrapper[4877]: I1211 19:15:04.006147 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" event={"ID":"bbf352d2-d21e-4342-b966-1ceb464f11fd","Type":"ContainerDied","Data":"403529dc1bddb30ef6d26eeb2510c6e6ee7b908b5ab9fda70cc6feb472129ff8"} Dec 11 19:15:04 crc kubenswrapper[4877]: I1211 19:15:04.006192 4877 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403529dc1bddb30ef6d26eeb2510c6e6ee7b908b5ab9fda70cc6feb472129ff8" Dec 11 19:15:04 crc kubenswrapper[4877]: I1211 19:15:04.006206 4877 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424675-pvl98" Dec 11 19:15:04 crc kubenswrapper[4877]: I1211 19:15:04.440017 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg"] Dec 11 19:15:04 crc kubenswrapper[4877]: I1211 19:15:04.447780 4877 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424630-2qjgg"] Dec 11 19:15:05 crc kubenswrapper[4877]: I1211 19:15:05.232622 4877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2802d435-b434-4dc8-9862-7feaef586d64" path="/var/lib/kubelet/pods/2802d435-b434-4dc8-9862-7feaef586d64/volumes" Dec 11 19:15:12 crc kubenswrapper[4877]: I1211 19:15:12.216228 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:15:12 crc kubenswrapper[4877]: E1211 19:15:12.217303 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:15:23 crc kubenswrapper[4877]: I1211 19:15:23.216077 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:15:23 crc kubenswrapper[4877]: E1211 19:15:23.217238 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:15:34 crc kubenswrapper[4877]: I1211 19:15:34.215293 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:15:34 crc kubenswrapper[4877]: E1211 19:15:34.216220 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:15:48 crc kubenswrapper[4877]: I1211 19:15:48.014777 4877 scope.go:117] "RemoveContainer" containerID="d82bcb1be32826fbb1a4e0abefb835e2a4d40f2d18d8e22106f7c00825a03dbe" Dec 11 19:15:48 crc kubenswrapper[4877]: I1211 19:15:48.238334 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:15:48 crc kubenswrapper[4877]: E1211 19:15:48.238737 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.027606 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kfrts"] Dec 11 19:15:52 crc kubenswrapper[4877]: E1211 19:15:52.031774 4877 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf352d2-d21e-4342-b966-1ceb464f11fd" 
containerName="collect-profiles" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.031813 4877 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf352d2-d21e-4342-b966-1ceb464f11fd" containerName="collect-profiles" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.033285 4877 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf352d2-d21e-4342-b966-1ceb464f11fd" containerName="collect-profiles" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.037764 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.059840 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfrts"] Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.127761 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-catalog-content\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.128014 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57rh\" (UniqueName: \"kubernetes.io/projected/0f9ba9be-a864-467a-9434-b47a4d57c971-kube-api-access-m57rh\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.128220 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-utilities\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " 
pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.230540 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-utilities\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.230673 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-catalog-content\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.230745 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m57rh\" (UniqueName: \"kubernetes.io/projected/0f9ba9be-a864-467a-9434-b47a4d57c971-kube-api-access-m57rh\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.231057 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-utilities\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.231138 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9ba9be-a864-467a-9434-b47a4d57c971-catalog-content\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " 
pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.466271 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m57rh\" (UniqueName: \"kubernetes.io/projected/0f9ba9be-a864-467a-9434-b47a4d57c971-kube-api-access-m57rh\") pod \"community-operators-kfrts\" (UID: \"0f9ba9be-a864-467a-9434-b47a4d57c971\") " pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:52 crc kubenswrapper[4877]: I1211 19:15:52.659858 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:15:53 crc kubenswrapper[4877]: I1211 19:15:53.140118 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfrts"] Dec 11 19:15:53 crc kubenswrapper[4877]: W1211 19:15:53.143713 4877 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9ba9be_a864_467a_9434_b47a4d57c971.slice/crio-7dbd6c6a39f0ff3f8be844da146486424f07c9d35673d0e10cc85665ced56aea WatchSource:0}: Error finding container 7dbd6c6a39f0ff3f8be844da146486424f07c9d35673d0e10cc85665ced56aea: Status 404 returned error can't find the container with id 7dbd6c6a39f0ff3f8be844da146486424f07c9d35673d0e10cc85665ced56aea Dec 11 19:15:53 crc kubenswrapper[4877]: I1211 19:15:53.595680 4877 generic.go:334] "Generic (PLEG): container finished" podID="0f9ba9be-a864-467a-9434-b47a4d57c971" containerID="c11830bed649118a8a5e50a6ed9b05153d5b49ef7d3098d777b6a8b2662baf5b" exitCode=0 Dec 11 19:15:53 crc kubenswrapper[4877]: I1211 19:15:53.595743 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfrts" event={"ID":"0f9ba9be-a864-467a-9434-b47a4d57c971","Type":"ContainerDied","Data":"c11830bed649118a8a5e50a6ed9b05153d5b49ef7d3098d777b6a8b2662baf5b"} Dec 11 19:15:53 crc kubenswrapper[4877]: I1211 19:15:53.596012 
4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfrts" event={"ID":"0f9ba9be-a864-467a-9434-b47a4d57c971","Type":"ContainerStarted","Data":"7dbd6c6a39f0ff3f8be844da146486424f07c9d35673d0e10cc85665ced56aea"} Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.630939 4877 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fd88d"] Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.636384 4877 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.642897 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd88d"] Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.775482 4877 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="e433a730-179e-4edf-93a9-9468b1714468" containerName="galera" probeResult="failure" output="command timed out" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.775482 4877 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="e433a730-179e-4edf-93a9-9468b1714468" containerName="galera" probeResult="failure" output="command timed out" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.785407 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6jz\" (UniqueName: \"kubernetes.io/projected/8a124acc-b336-462d-bf33-57d0c548fba1-kube-api-access-wz6jz\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.785478 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-utilities\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.785502 4877 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-catalog-content\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.887600 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz6jz\" (UniqueName: \"kubernetes.io/projected/8a124acc-b336-462d-bf33-57d0c548fba1-kube-api-access-wz6jz\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.887698 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-utilities\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.887731 4877 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-catalog-content\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.888424 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-catalog-content\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.888750 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a124acc-b336-462d-bf33-57d0c548fba1-utilities\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.908431 4877 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz6jz\" (UniqueName: \"kubernetes.io/projected/8a124acc-b336-462d-bf33-57d0c548fba1-kube-api-access-wz6jz\") pod \"certified-operators-fd88d\" (UID: \"8a124acc-b336-462d-bf33-57d0c548fba1\") " pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:54 crc kubenswrapper[4877]: I1211 19:15:54.979008 4877 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:15:55 crc kubenswrapper[4877]: I1211 19:15:55.587648 4877 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd88d"] Dec 11 19:15:55 crc kubenswrapper[4877]: I1211 19:15:55.639106 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd88d" event={"ID":"8a124acc-b336-462d-bf33-57d0c548fba1","Type":"ContainerStarted","Data":"8b0243f3766db3d5888d4b925d59822f45ff348d25e6cceef150ed6089d70cf7"} Dec 11 19:15:55 crc kubenswrapper[4877]: I1211 19:15:55.642772 4877 generic.go:334] "Generic (PLEG): container finished" podID="0f9ba9be-a864-467a-9434-b47a4d57c971" containerID="e337e3b44ac12ba7b45bdc48c24e731601856c9e89950afbddf40242e27725f4" exitCode=0 Dec 11 19:15:55 crc kubenswrapper[4877]: I1211 19:15:55.642821 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfrts" event={"ID":"0f9ba9be-a864-467a-9434-b47a4d57c971","Type":"ContainerDied","Data":"e337e3b44ac12ba7b45bdc48c24e731601856c9e89950afbddf40242e27725f4"} Dec 11 19:15:56 crc kubenswrapper[4877]: I1211 19:15:56.657466 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfrts" event={"ID":"0f9ba9be-a864-467a-9434-b47a4d57c971","Type":"ContainerStarted","Data":"0048118f6c7d9caee059a027480fe6c8a3dd709446c59d576fbad2190f64b250"} Dec 11 19:15:56 crc kubenswrapper[4877]: I1211 19:15:56.661056 4877 generic.go:334] "Generic (PLEG): container finished" podID="8a124acc-b336-462d-bf33-57d0c548fba1" containerID="c1538deec3afe3c796b0fc935fb4e599ce9fed3e0b91dfad967615cb6cccbbec" exitCode=0 Dec 11 19:15:56 crc kubenswrapper[4877]: I1211 19:15:56.661093 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd88d" 
event={"ID":"8a124acc-b336-462d-bf33-57d0c548fba1","Type":"ContainerDied","Data":"c1538deec3afe3c796b0fc935fb4e599ce9fed3e0b91dfad967615cb6cccbbec"} Dec 11 19:15:56 crc kubenswrapper[4877]: I1211 19:15:56.686623 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kfrts" podStartSLOduration=3.153387972 podStartE2EDuration="5.686601069s" podCreationTimestamp="2025-12-11 19:15:51 +0000 UTC" firstStartedPulling="2025-12-11 19:15:53.598324605 +0000 UTC m=+4514.624568689" lastFinishedPulling="2025-12-11 19:15:56.131537702 +0000 UTC m=+4517.157781786" observedRunningTime="2025-12-11 19:15:56.6794924 +0000 UTC m=+4517.705736484" watchObservedRunningTime="2025-12-11 19:15:56.686601069 +0000 UTC m=+4517.712845133" Dec 11 19:15:57 crc kubenswrapper[4877]: I1211 19:15:57.680856 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd88d" event={"ID":"8a124acc-b336-462d-bf33-57d0c548fba1","Type":"ContainerStarted","Data":"def460f5de0da1cc82f8a6fb5fb5adde0a1187cd4098b59996ba766f971b0e6b"} Dec 11 19:15:58 crc kubenswrapper[4877]: I1211 19:15:58.690834 4877 generic.go:334] "Generic (PLEG): container finished" podID="8a124acc-b336-462d-bf33-57d0c548fba1" containerID="def460f5de0da1cc82f8a6fb5fb5adde0a1187cd4098b59996ba766f971b0e6b" exitCode=0 Dec 11 19:15:58 crc kubenswrapper[4877]: I1211 19:15:58.691039 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd88d" event={"ID":"8a124acc-b336-462d-bf33-57d0c548fba1","Type":"ContainerDied","Data":"def460f5de0da1cc82f8a6fb5fb5adde0a1187cd4098b59996ba766f971b0e6b"} Dec 11 19:15:59 crc kubenswrapper[4877]: I1211 19:15:59.707714 4877 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd88d" 
event={"ID":"8a124acc-b336-462d-bf33-57d0c548fba1","Type":"ContainerStarted","Data":"f20cdc624c178aa00d56b9abddddc70b4029c514be6a863a17aff911cff01c81"} Dec 11 19:15:59 crc kubenswrapper[4877]: I1211 19:15:59.740132 4877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fd88d" podStartSLOduration=3.247724458 podStartE2EDuration="5.740107296s" podCreationTimestamp="2025-12-11 19:15:54 +0000 UTC" firstStartedPulling="2025-12-11 19:15:56.663843572 +0000 UTC m=+4517.690087626" lastFinishedPulling="2025-12-11 19:15:59.15622642 +0000 UTC m=+4520.182470464" observedRunningTime="2025-12-11 19:15:59.733207942 +0000 UTC m=+4520.759452016" watchObservedRunningTime="2025-12-11 19:15:59.740107296 +0000 UTC m=+4520.766351370" Dec 11 19:16:02 crc kubenswrapper[4877]: I1211 19:16:02.215613 4877 scope.go:117] "RemoveContainer" containerID="38a2057d90177cb419df5c073921b4f49bef199f69117d07b64bbb7d771699d0" Dec 11 19:16:02 crc kubenswrapper[4877]: E1211 19:16:02.216796 4877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjnxr_openshift-machine-config-operator(47cbee6c-de7f-4f75-8a7b-6d4e7da6f963)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjnxr" podUID="47cbee6c-de7f-4f75-8a7b-6d4e7da6f963" Dec 11 19:16:02 crc kubenswrapper[4877]: I1211 19:16:02.660703 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:16:02 crc kubenswrapper[4877]: I1211 19:16:02.660890 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:16:02 crc kubenswrapper[4877]: I1211 19:16:02.753839 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:16:03 crc kubenswrapper[4877]: I1211 19:16:03.833818 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kfrts" Dec 11 19:16:03 crc kubenswrapper[4877]: I1211 19:16:03.911163 4877 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kfrts"] Dec 11 19:16:04 crc kubenswrapper[4877]: I1211 19:16:04.980164 4877 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:16:04 crc kubenswrapper[4877]: I1211 19:16:04.980629 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:16:05 crc kubenswrapper[4877]: I1211 19:16:05.032856 4877 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fd88d" Dec 11 19:16:05 crc kubenswrapper[4877]: I1211 19:16:05.773602 4877 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kfrts" podUID="0f9ba9be-a864-467a-9434-b47a4d57c971" containerName="registry-server" containerID="cri-o://0048118f6c7d9caee059a027480fe6c8a3dd709446c59d576fbad2190f64b250" gracePeriod=2